InfluxDB
Latest revision as of 17:46, 1 September 2016

The current version is 0.10.

For version 0.8, see the page InfluxDB 0.8.


http://influxdb.com/

https://github.com/influxdb/influxdb

An open-source distributed time series database for storing metrics, events, and analytics.

Developed in the Go language.

  • retention policy: data is automatically deleted once it is older than the retention period
  • sharding: data is partitioned into shards by time range
  • replication: shards can be replicated across several data nodes

See https://speakerdeck.com/pauldix/introducing-influxdb-an-open-source-distributed-time-series-database

Demo at http://play.influxdb.com/ (source code)

InfluxDB @ AIR

Installation

http://influxdb.com/download/

On Linux (Debian)

# for 64-bit systems
wget https://dl.influxdata.com/influxdb/releases/influxdb_0.13.0_amd64.deb
sudo dpkg -i influxdb_0.13.0_amd64.deb 
sudo service influxdb start
sudo service influxdb status

On OS X

brew update
brew install influxdb

With Docker

See https://hub.docker.com/r/tutum/influxdb/

Startup

On Linux (Debian)

If needed, configure InfluxDB by editing /etc/influxdb/influxdb.conf

sudo service influxdb start
ps wwwax | grep influxdb

Stop it with

sudo service influxdb stop

On Mac OS X

influxd -config /usr/local/etc/influxdb.conf

Browse to http://localhost:8083/ (username: root, password: root).

Remember to change these credentials in /usr/local/etc/influxdb.conf when putting InfluxDB into production.

The configuration file can be generated with:

/opt/influxdb/influxd config > /etc/influxdb/influxdb.generated.conf

Note: the UDP, Graphite, etc. input interfaces can be enabled there.
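For example, the UDP listener is switched on through a section of influxdb.conf like the following (a sketch; check the exact keys and defaults against the generated configuration file):

```toml
# Hypothetical excerpt of /etc/influxdb/influxdb.conf enabling the UDP input.
[[udp]]
  enabled = true
  bind-address = ":8089"   # port the UDP listener binds to
  database = "udp"         # database that received points are written to
```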

Note: in production, the ports should be secured (e.g. HTTPS) if the server is not behind a firewall.

Schema Design

The database schema consists of time series containing data points.

A data point is a tuple:

measurement,tag=value,tag=value field=value,field=value timestamp
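As a sketch, such a point can be assembled from its parts; the measurement, tag, and field names below are illustrative, not from a real schema:

```python
# Assemble an InfluxDB line-protocol point from measurement, tags, fields, timestamp.
def to_line_protocol(measurement, tags, fields, timestamp_ns):
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    return f"{measurement},{tag_str} {field_str} {timestamp_ns}"

point = to_line_protocol("cpu",
                         {"host": "serverA", "region": "us_west"},
                         {"value": 0.64},
                         1434067467000000000)
print(point)  # cpu,host=serverA,region=us_west value=0.64 1434067467000000000
```

(Real clients also escape commas and spaces in names and quote string field values, which this sketch omits.)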

Data points are indexed on the timestamp (primary index) and on the tags (secondary indexes).

First Steps

Via the shell

influx
influx -host localhost

Create the database

CREATE DATABASE mydb
SHOW DATABASES
USE mydb

Insert points

INSERT cpu,host=serverA,region=us_west value=0.64
INSERT cpu,host=serverA,region=us_east value=0.70
INSERT cpu,host=serverA,region=us_west value=0.70
INSERT cpu,host=serverA,region=us_east value=0.90

INSERT payment,device=mobile,product=Notepad,method=credit billed=33,licenses=3i 1434067467100293230

INSERT stock,symbol=AAPL bid=127.46,ask=127.48

INSERT temperature,machine=unit42,type=assembly external=25,internal=37 1434067467000000000
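The same points can also be written over HTTP: the 0.9+ API accepts line-protocol points on POST /write. A sketch that only builds the request (actually sending it assumes a server running on the default port 8086):

```python
# Build (without sending) an HTTP write request carrying line-protocol points.
from urllib.parse import urlencode
from urllib.request import Request

points = "\n".join([
    "cpu,host=serverA,region=us_west value=0.64",
    "cpu,host=serverA,region=us_east value=0.70",
])
url = "http://localhost:8086/write?" + urlencode({"db": "mydb"})
req = Request(url, data=points.encode("utf-8"), method="POST")
print(req.full_url)  # http://localhost:8086/write?db=mydb
# urllib.request.urlopen(req) would perform the write against a live server.
```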

Query the database

show series

SELECT * FROM cpu
SELECT * FROM cpu WHERE value >= 0.7
SELECT * FROM cpu LIMIT 1
SELECT value FROM cpu WHERE time > now() - 7d
SELECT mean(value) FROM cpu WHERE time > now() - 1h GROUP BY time(10m);

SELECT * FROM temperature

Query the database continuously (Continuous Queries)

# Create a CQ to sample the 95% value from 5 minute buckets of the response_times measurement
CREATE CONTINUOUS QUERY response_times_percentile ON mydb BEGIN
  SELECT percentile(value, 95) INTO "response_times.percentiles.5m.95" FROM response_times GROUP BY time(5m)
END


List and delete continuous queries

SHOW CONTINUOUS QUERIES

DROP CONTINUOUS QUERY response_times_percentile ON mydb

Change the retention of database mydb (i.e. data is kept for only one day):

ALTER RETENTION POLICY monitor ON mydb DURATION 1d REPLICATION 1 DEFAULT
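A retention policy can also be created from scratch and then inspected; the policy name one_day below is illustrative:

```sql
CREATE RETENTION POLICY "one_day" ON mydb DURATION 1d REPLICATION 1
SHOW RETENTION POLICIES ON mydb
```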


Show statistics

SHOW STATS


Show diagnostics

SHOW DIAGNOSTICS

Show the cluster

SHOW SERVERS


Query the internal database _internal

USE _internal
show measurements
show series limit 10
select * from httpd limit 10
select * from runtime limit 10

Via the REST API

curl -G 'http://localhost:8086/query?pretty=true' --data-urlencode "db=mydb" --data-urlencode \
 "q=SELECT value FROM cpu WHERE region='us_west'"

Multiple queries

curl -G 'http://localhost:8086/query?pretty=true' --data-urlencode "db=mydb" --data-urlencode \
 "q=SELECT value FROM cpu WHERE region='us_west';SELECT count(value) FROM cpu WHERE region='us_west'"

Returning timestamps in seconds

curl -G 'http://localhost:8086/query?pretty=true' --data-urlencode "db=mydb" \
 --data-urlencode "epoch=s" --data-urlencode "q=SELECT value FROM cpu WHERE region='us_west'"

Returning results in chunks of 20000 points

curl -G 'http://localhost:8086/query?pretty=true' --data-urlencode "db=mydb" \
 --data-urlencode "chunk_size=20000" --data-urlencode "q=SELECT * FROM cpu"
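The curl invocations above simply URL-encode a handful of parameters against the /query endpoint; a sketch of building the same URL (default port 8086 assumed):

```python
# Build the /query URL that the curl examples send (db, q, plus optional
# pretty/epoch/chunk_size parameters).
from urllib.parse import urlencode

params = {
    "db": "mydb",
    "q": "SELECT value FROM cpu WHERE region='us_west'",
    "epoch": "s",          # return timestamps as epoch seconds
    "chunk_size": 20000,   # stream results in chunks of 20000 points
}
url = "http://localhost:8086/query?" + urlencode(params)
print(url)
```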

Via the Web UI

From the web UI at http://localhost:8083/, create a database: mydb

Clustering & Replication

InfluxDB uses the Raft protocol for leader election: at least 3 machines must be configured as meta nodes. Note: a machine can be a meta node only, a data node only, or both. The number of data nodes depends on the desired replication level.


https://docs.influxdata.com/influxdb/v0.10/guides/clustering/

Check the configuration from the influx shell:

SHOW SERVERS

Backup

Manual: https://docs.influxdata.com/influxdb/v0.10/administration/backup_and_restore/

influxd backup -database telegraf -retention default -since 2016-02-01T00:00:00Z /tmp/backup/telegraf-2016-02-01.db.backup

Restore

sudo service influxdb stop
influxd restore -database telegraf -datadir /var/lib/influxdb/data /tmp/backup/telegraf-2016-02-01.db.backup
sudo service influxdb start
influx -execute 'show databases'

Related Projects

Telegraf

https://github.com/influxdb/telegraf

An agent written in Go for collecting metrics from the system it's running on, or from other services, and writing them into InfluxDB.

Chronograf

Chronograf is a single binary web application that you can deploy behind your firewall to do ad hoc exploration of your time series data in InfluxDB. (link)

Detailed installation and usage: see the Chronograf page.

Kapacitor

Kapacitor is a data processing engine. It can process both stream (realtime subscription) and batch (bulk query) data from InfluxDB. Kapacitor lets you define custom logic to process alerts with dynamic thresholds, match metrics for patterns, compute statistical anomalies, etc. (link)

Extra

  • Jmxtrans
  • Grafana