Mondial des Pinots 2014, travel notes

August 18, 2014

 

Valais landscape

What an incredible opportunity for me to take part in this competition as a judge: thank you to Tatiana, my wonderful wife, with whom I look forward to spending three unforgettable days.

The Mondial des Pinots is three days in Sierre, Switzerland: three days to taste and award medals to wines from all over the world, all of them made from the various grape varieties of the Pinot family.

Pinot Noir, Pinot Gris or Pinot Blanc: so many grape varieties behind a wide range of wines, coming from the producing countries of "old Europe" as well as from the New World.

I can't wait to be there, to discover new flavours and new products, and to broaden my knowledge of the fascinating world of wine.

 

Thursday, August 14, 2014 – Day 1

 

11:00, for now I am on the train: a TGV Lyria heading for Basel, with three hours ahead of me to think and rest. I feel good.

I must admit I am still quite excited … the idea of being a judge at an international competition … maybe even a little stressed.

I am a software engineer … not at all a professional of the wine trade; a passionate, informed and curious amateur, constantly looking for new experiences: that, yes, but nothing more! So it is with all due humility that I will take part in this competition.

That said, I do have a little experience: I have been a judge at the Concours du Vigneron Indépendant for the past two years. And I keep learning too: courses, certified professional training (WSET), producer visits and tastings. And then … well, I'll stop … I am just surprised to be here, that's all.

I would also like to write a few words about the Pinots and their wines … especially those I will have tasted during the competition; but that will be for another day.

It is raining outside, the sky is very grey and the train windows are streaked with fine drops of rain … very autumnal weather for an August 15 weekend.

14:00, we are in Basel; the train to Bern is announced with a half-hour delay. A little stress, but trains in Switzerland are so frequent that we will still arrive on time for the tasters' training session.

17:00, the Hôtel Terminus is nice and very close to the Hôtel de Ville in Sierre, where the tastings will take place. A small anecdote on arrival: the door would not open; we are offered a drink at the hotel brasserie, but after more than six hours of travel, what we mostly want is time to freshen up and change before the 17:30 meeting.

17:30, the organisation seems perfect from the very first moment: a warm welcome, and a scoring system that is very easy to pick up. We practise on four wines: a Pinot Blanc, a Pinot Noir rosé (Œil de Perdrix) and two red Pinot Noirs.

18:30, off to dinner at Château Mercier. The place is magnificent, both the château and the grounds. An aperitif: I try a white sparkling wine from the Valais (not really to my taste: too fruit-driven, Asti Spumante style although less sweet) and I meet a Swiss wine merchant, a journalist from Burgundy and an Italian from the Aosta Valley: the evening is off to a good start (I really must make an effort to remember people's names … so typical of me).

20:00, the dinner is sumptuous and so are the wines:

Starter: prawn salad served with an Œil de Perdrix rosé.

Main course: poularde in sauce with rice and vegetables, served with an Ermitage (I am very pleasantly surprised by the wine: a Marsanne with oxidative and woody aromas that pairs wonderfully with the dish).

Dessert: coconut blancmange with apricot coulis, served with a sweet white wine … I no longer remember what it was, but the pairing was very good.

In the meantime, we also taste two reds and another sweet white produced by members of the jury … but that is too much for one day: my memory fails me.

22:00, back on foot, a short digestive walk and off to bed: tomorrow we start early!!

 

Friday, August 15, 2014 – Day 2

 

The next morning, a somewhat laborious wake-up at 7:30; a good shower is enough to wash away the excesses of the night before. Breakfast is dispatched around 8:15 so as to be in the tasting room on time at 8:50.

9:00, the first tasting session begins and the morning is only getting started: 42 samples in 3.5 hours, with a 10-minute break roughly every 14 samples. The organisation is well-oiled, the service perfect; I am impressed.

15 white sparkling wines (Pinot Noir, Pinot Blanc or Pinot Gris), 9 sparkling rosés (Pinot Noir) and 18 red Pinot Noirs.

No gold medal for the sparkling wines, and the silvers are far from gold: the evaluations are fairly consensual and we are all a little disappointed with the series.

Two of us would have given a gold to a sample in a very Champagne-like style … which afterwards turns out to actually be a Champagne. The best-rated wine is a Crémant d'Alsace. Many countries are represented.

With the red Pinot Noirs we are luckier, the series is good: three gold medals and quite a few silvers, and the judgements are once again very consensual. The Swiss are clearly very good at red Pinot Noir: 17 of the 18 wines are Swiss!

12:30, after the effort comes the reward: a small glass of Chasselas, and we set off to visit the Cave des Bernunes for a standing aperitif-lunch. Two wines are offered to accompany a sumptuous buffet in an extraordinary setting, a Chasselas and a Gamay … but I am not dazzled by them. Worth noting: the local name for Chasselas is Fendant.

Cave des Bernunes – vats

14:30, after touring the facilities in the cellar we set off again … the programme calls for a guided tour of the paintings at Château Mercier, but some of us are diverted by Madeleine Mercier, who opens her cellar to us, shows us around and has us taste four very pretty Valais wines:

A Petite Arvine 2013, very fruity, with a lovely freshness coming from fine acidity.

A Savagnin 2013 (local name: Païen), aged six months in barrel, which combines very pretty fruit with an interesting structure … the oak, perhaps a little too present, should become more discreet and elegant with age, according to our gracious hostess.

A splendid Cornalin (Rouge du Pays), partly vinified in oak barrels where it spends a year before bottling. It is a 2012 with beautiful dark fruit of cherry and blackcurrant, spice and structure. The oak is present but elegant, the tannins silky.

A Syrah 2012, made in the same way as the Cornalin; very well done but more classic.

16:30, back to the hotel on foot; we have a short three hours left to rest before the raclette at Château Villa. I sleep for two hours without any trouble!

19:15, departure from the hotel; after asking the bus driver for directions, we decide to walk up. Twenty minutes later we are there; the meal will start around 20:00.

20:00, five different raclettes are served, coming from five different valleys of the Valais. They are accompanied by two Chasselas and a red Pinot Noir. The company at the table is friendly and the topics of conversation very varied: AOCs, sport and international politics keep us busy; I really like the Pinot Noir, a "Lucifer", diabolically good!

22:00, a walk back to digest, and here I am in bed catching up on my travel notes. I am delighted with this day; beyond the competition, I am experiencing the Valais vineyards and meeting plenty of people from the wine world: journalists, producers, trade professionals, writers and experts. Even a Greek oenologist with whom I have a long discussion about the Nemea vineyards: there too, I realise I will have to deepen my knowledge of a wine region I already love so much (I will have to get in touch with her by email so she can give me a few addresses again).

 

Saturday, August 16, 2014 – Day 3

 

And here we go again! Up at 7:45, shower, breakfast and the tastings resume at 9:00 sharp.

9:00, this morning it is again 42 samples: 15 sparkling wines based on Pinot Noir, Blanc or Gris, followed by 27 red Pinot Noirs (12 from 2013, 13 from 2011 and 2 from 2009).

Compared with the previous day, the results are reversed: we have good sparkling wines (two golds) but are disappointed with the reds.

In any case, our jury is working wonderfully:

  • we agree on most of the wines,
  • we are able both to score harshly the wines we dislike and to reward generously those we like,
  • we do not set our personalities aside and show strong disparities on some wines: up to a 10-point spread on one or two of them!

12:30, as usual an aperitif awaits us after the tastings: a Chasselas in the sunshine that is not unpleasant, and we enjoy it in good company!

13:00, off we go; Tatiana and I decide to walk up, the weather being perfect for it. A standing lunch at Château Mercier; the buffet is splendid and made entirely by the château's cook: hats off! The setting is as beautiful as ever and the weather is on our side.

Among the wide variety of wines on offer, I go for Swiss reds from grape varieties typical of the canton (Cornalin, Gamaret, Humagne Rouge) and, to finish, a sweet wine with dessert: a magnificent Marsanne, sweet and aromatic.

14:15, we set off for Martigny: a coach trip along the Rhône valley, narrated and described by François Murisier: brilliant, an ideal way to get to know the region better, its geography, its history and its products.

The Fromathèque in Martigny: Pinot Blanc, Petite Arvine and Syrah accompany a cheese platter (sheep's and goat's milk) and a charcuterie platter. Although very good, it is a little beyond our strength: lunch was too good and too recent, and we have sinned by gluttony!

I do wonder a little about this fromathèque and about its owners' choice to make sheep's-milk cheeses in raclette country …

16:00, coach ride up to Lake Champex, narrated by François; we pass through Orsières, his home village: the landscape is dizzyingly beautiful and the commentary lives up to it!

A visit to the Alpine Garden of Champex: this place is truly a treasure, both for its location (the views over the lake are really magnificent) and for its content: nearly 4,000 varieties of plants, partly from the Alps but also from elsewhere, have been acclimatised by several generations of gardeners within tiny biotopes reproducing their original environments. The whole thing is really very beautiful!

Champex – Lac

19:00, dinner by the lake at the restaurant Le Club Alpin: chanterelle toast as a starter, lake fish meunière as the main course, and a blueberry tartlet with an espresso.

Wines tasted: Blanc de Mer (70% Chardonnay, 30% Amigne) and Dôle (Pinot Noir, Gamay). I really liked the Dôle: you find the aromas of Gamay combined with the richness and structure of Pinot Noir.

22:00, back by coach; François tells jokes to help us through the narrow, high-perched and very winding stretch of road: he is a well-deserved hit!

23:00, arrival at the hotel; I finish the notes I started on the coach ride back … and then off to sleep!

 

Sunday, August 17, 2014 – Day 4

 

The next morning I feel fresh and rested: shower, breakfast and, unfortunately, the suitcases also have to be packed, as the competition ends at midday; a touch of nostalgia is already setting in.

9:00, the tasting resumes and our wish comes true: this time we will have no sparkling wines!

27 red Pinot Noirs are followed by 14 Pinot Gris: our jury is pleased and eager to try its hand at this new exercise; we will not be disappointed!

Four magnificent gold medals go to the four sweetest Pinot Gris of the series, including one that really struck me, for what will turn out to be a sumptuous German Eiswein!

The red Pinot Noirs are uneven, but there are still some nice surprises; we award three gold medals, to Germany, to the United States (Oregon) and to New Zealand (Marlborough).

12:30, we leave the tasting room, the very last to do so and very happy with these three days spent tasting and scoring wines together! For me, the experience has been fantastic and will remain unforgettable!

A standing lunch awaits us in the gardens of the Hôtel de Ville: as usual the food is very good, in particular the stuffed vegetables (mushrooms with beef, courgettes with onions, tomatoes with pork).

For the wine, I go for a red Cornalin: it lives up to my expectations, fruity with pleasant oak, and it goes very well with the meat-stuffed vegetables and the cheeses that follow.

And then it is time to leave, to say goodbye to all these new acquaintances and to thank Elisabeth and François from the bottom of my heart!

See you next year, I hope!

 

Notes:

Jury number 3:

  • Dominique Moncomble (secretary) – France
  • Felipe de Solminihac – Chile
  • Edita Durcova – Slovakia
  • Christian Guyot – Switzerland
  • Nicolas Vahlas – Greece

Jury 3

Categories: Wines

JSON Schema: first Java implementation available!

May 17, 2010

Java source code available on Gitorious

Yesterday night, I published a first version of the source code on Gitorious. It is released under the Apache V2.0 License.

This implementation covers nearly all of the “Core Schema Definition” corresponding to paragraph 5 of the specification. The “missing” items (mainly 5.21, 5.22 and 5.25) concern points of the specification that need to be clarified before they can be implemented.

Concerning the implementation itself, the main design ideas are the following (a small illustrative sketch follows the list):

  • Each validator should be a small, stateless and easy-to-test object, implementing one and only one of the rules of the specification.
  • A schema object should be a “validating engine”, containing a graph of validator objects built at construction time.
  • Once loaded, a schema object should be reusable in order to validate as many JSON instances as needed.
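To make these ideas a little more concrete, here is a minimal sketch of the "one small stateless validator per rule" idea. The names and signatures (Validator, MinimumValidator) are invented for this illustration and do not necessarily match the classes in the published code:

import java.util.Collections;
import java.util.List;

// One validator interface, implemented once per rule of the specification.
interface Validator {
    // Returns an empty list when the node is valid, one message per violation otherwise.
    List<String> validate(Object node, String path);
}

// Example: rule 5.7 ("minimum") as a tiny stateless object, easy to test in isolation.
class MinimumValidator implements Validator {
    private final double minimum;

    MinimumValidator(double minimum) {
        this.minimum = minimum;
    }

    public List<String> validate(Object node, String path) {
        if (node instanceof Number && ((Number) node).doubleValue() < minimum) {
            return Collections.singletonList(path + ": value is lower than the minimum " + minimum);
        }
        return Collections.<String>emptyList();
    }
}

A schema object then simply holds a graph of such validators, built once when the schema is loaded, and runs them against every instance it is asked to validate.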

As you may already have guessed, a “wide” set of JUnit test cases is provided with the source code. Each test case allows testing one of the “validators” separately, using very simple JSON schemas and instances. There is also a more “complicated” and “complete” test case for testing combinations of validators.

Finally, some more work has to be done on the Java documentation … I will deal with it over the following days and push it to the central repository.

Usage

The following few lines of code show how you can use the implementation to validate a JSON instance against a JSON schema:

		// Jackson parsing API: the ObjectMapper can be provided
		// and configured differently depending on the application
		ObjectMapper mapper = new ObjectMapper();

		// Allows to retrieve a JSONSchema object on various sources
		// supported by the ObjectMapper provided
		JSONSchemaProvider schemaProvider = new JacksonSchemaProvider(mapper);

		// Retrieves a JSON Schema object based on a file
		InputStream schemaIS = new FileInputStream("schema.json");
		JSONSchema schema = schemaProvider.getSchema(schemaIS);

		// Validates a JSON Instance object stored in a file
		InputStream instanceIS = new FileInputStream("instance1.json");
		List<String> errors = schema.validate(instanceIS);

		// Display the eventual errors
		for ( String s : errors ) {
			System.out.println(s);
		}

The project should be easy to build with Maven: a “pom.xml” file is provided with the source code. A simple “mvn package” should be enough to build the code, run the tests, produce the javadoc and the jar file.

I have also made some JARs available for those who do not wish to build the JSON Schema validator from the source code:

  • The binary archive is available here
  • The javadoc archive is available here

Plans for the near future …

I will post on the Jackson project’s mailing lists in order to get some feedback from them: I would be very happy and proud to see this code tightly integrated inside the Jackson project!

I will also ask the people in charge of the specification for the clarifications needed on the “missing” points of this implementation: I would love to have 100% of the specification implemented. More generally, I have some questions concerning the possibility of referencing / reusing existing JSON Schemas: the Core Schema Definition seems to allow only “anonymous” types. In a complex schema, the possibility to define and reuse “named” types (like in XML Schema) would be very handy.

At this very early stage, any help will be welcome: testing, using, fixing, extending … there is still some work to be done before the first release. I plan to use this implementation as it is on a project in the very near future … I will of course publish any fix, extension or documentation. For example, I will make a Google Guice module in the context of this project in order to avoid all the “boilerplate” instantiation code that you can see in my example (Google Guice is my preferred choice when it comes to DI ;-) ).

Implementation Matrix: Paragraph 5 – Core Schema Definition

§  Title  Status
5.1 type simple: OK / union: OK
5.2 properties OK
5.3 items simple: OK / tuple: OK
5.4 optional OK
5.5 additionalProperties OK
5.6 requires name: OK / schema: OK
5.7 minimum OK
5.8 maximum OK
5.9 minimumCanEqual OK
5.10 maximumCanEqual OK
5.11 minItems OK
5.12 maxItems OK
5.13 uniqueItems OK
5.14 pattern OK
5.15 maxLength OK
5.16 minLength OK
5.17 enum OK
5.18 title NOTHING
5.19 description NOTHING
5.20 format TODO (OPTIONAL)
5.21 contentEncoding TODO
5.22 default TODO
5.23 divisibleBy OK
5.24 disallow OK
5.25 extends TODO

JSON Schema: specifying and validating JSON data structures

April 23, 2010

Introduction

From my own experience, I can see 2 major reasons why someone would need a JSON Schema language:

  1. Specify JSON data structures: this is particularly useful when exposing JSON based web services to a wide audience and documenting them.
  2. Validate JSON data structures.

Recently, I started designing / writing JSON based web services on the Java platform for the Europass project. The purpose of these services was to allow external web applications to easily integrate the Europass CV online generation services. I was really surprised to see that there was no standard way to describe the structure of the expected JSON objects and, as a consequence, no standard way to validate the incoming objects.

Some googling around and I rapidly landed on the draft JSON Schema specification.

This draft specification helped me a lot for my project as it allowed me to write my schemas in a mature and ready-to-use syntax … instead of inventing one of my own. However, I rapidly realised that there was no Java implementation of this specification. Even the very good Jackson JSON Processor project that I decided to use for the processing of the JSON streams had “only” implemented the generation of a JSON Schema starting from a JSON Object.

This is the reason why I decided to write an implementation of my own and share it with all the people potentially interested in it. Moreover, I decided to write it at home, in my free time in order to be able to distribute it under an open license … but because of this, I cannot guarantee that the development will go very fast ;-)

The JSON Schema specification

Now, let’s go back to the specification itself. It has several parts, but the first one I am going to work on is the “core” specification corresponding to paragraph 5 of the text. The first thing to say is that JSON Schema is to JSON what XSD is to XML:

  • JSON Schema is self-descriptive: you can write a JSON Schema describing the syntax of JSON Schema.
  • JSON Schemas are written in JSON.

The writer of the specification and owner of the corresponding Google Group (Kris Zyp) has written an implementation of the “core” specification in Javascript: it is a very good starting point for anyone willing to implement the specification or simply test it. I have written a simple HTML page that allows testing a JSON instance against a JSON Schema using this Javascript implementation. I have used it myself to write this post and to start the Java implementation of the specification.

In the following paragraphs, I briefly present the core JSON Schema language through some examples. My objective is to show that it is quite easy to write and expressive enough for most cases.

The basic types

The core specification defines the following 8 types: Object, Array, String, Number, Integer, Boolean, Null and Any. Two of them are container types (Object and Array), five are atomic types and one (Any) is very convenient :-) Its existence is also tightly related to the dynamic nature of the Javascript language. By the way, the JSON home page and a good source of resources and information on JSON is: http://www.json.org/.

In order to specify the “type” of a JSON object as described by the specification, you should write a JSON object with a property named “type” containing one of the following 8 strings: object, array, string, number, integer, boolean, null or any.

In this post, I use 2 examples based on the 2 container types, which also help me cover a lot of the simple types.

Specifying an object

First the schema …
{
  "description" : "Example Address JSON Schema",
  "type" : "object",
  "properties" : {
    "address" : {
      "title": "Street name and number",
      "type" : "string"
    },
    "city" : {
      "title" : "City name",
      "type" : "string"
    },
    "postalCode" : {
      "title" : "Zip Code: 2 letters dash five digits",
      "type" : "string",
      "pattern" : "^[A-Z]{2}-[0-9]{5}$"
    },
    "region" : {
      "title" : "Optional Region name",
      "type" : "string",
      "optional" : true
    },
    "country" : {
      "title" : "Country name",
      "type" : "string"
    }
  },
  "additionalProperties" : false
}
… and some explanations

As you can see, specifying an “object” in JSON Schema is as simple as having the following 2 properties in a JSON object:

  • a “type” property with a string value set to “object”
  • a “properties” property containing an object, whose properties are named after the properties of the object described and contain a JSON Schema describing them.

You may have noticed the “additionalProperties” property, which is set to false. This property specifies whether additional properties are allowed or not: if they are, “additionalProperties” must contain a JSON Schema describing them; if not, it must be set to false. In our Address JSON Schema, we do not allow any properties other than those described in the schema.

Note that the “title” and “description” properties are for general usage (and optional): they are used to document the JSON Schema.

If we dive a little more deeply into the example schema, we can see that each property of the Address schema (address, city, postalCode, region, country) is in turn described by a JSON Schema, with a “type” property taking one of the 8 types allowed by the core specification and some more properties that define the usage of the property in greater detail.

Some properties you can find in our Address schema are:

  • the “optional” property, which specifies whether a property is required or not for the object to be valid. In our example, an Address object is valid even if it does not contain information on the region.
  • the “pattern” property, which sets a regular expression that string properties must match. In our example, a valid postal code is composed of 2 capital letters followed by a dash and five digits.
Second the instance …

Here is a JSON Address instance complying with this JSON Schema:

{
  "address" : "Μέγαλου Σπηλαίου 4",
  "city" : "Athens",
  "postalCode" : "GR-15125",
  "country" : "Greece"
}
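
For contrast, here is an instance that should be rejected by a validator applying the schema above: the required “city” property is missing, “postalCode” does not match the pattern, and the unexpected “phone” property violates the “additionalProperties” rule.

{
  "address" : "Μέγαλου Σπηλαίου 4",
  "postalCode" : "15125",
  "country" : "Greece",
  "phone" : "+302109349764"
}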

Specifying an array

First the schema …
{
  "description" : "Example Contact Information Array JSON Schema",
  "type" : "array",
  "items" : {
    "title" : "A Contact Information object",
    "type" : "object",
    "properties" : {
      "name" : {
        "type" : "string",
        "enum" : ["home", "work", "other"]
      },
      "phone" : {
        "type" : "string",
        "optional" : true,
        "format" : "phone"
      },
      "mobile" : {
        "type" : "string",
        "optional" : true,
        "format" : "phone"
      },
      "email" : {
        "type" : "string",
        "optional" : true,
        "format" : "email"
      }
    }
  },
  "minItems" : 1,
  "maxItems" : 5
}
… and some explanations

As you can see, specifying an “array” in JSON Schema is as simple as having the following 2 properties in a JSON object:

  • a “type” property with a string value set to “array”
  • an “items” property containing a JSON Schema used to validate each element of the array. Please note that the “items” property may contain an array of JSON Schemas in order to validate each element of the array against a different schema: this is called tuple validation.

You may have noticed that, next to the “items” property, the array schema also contains some properties specific to arrays:

  • a “minItems” property specifying the minimum number of elements the array should contain in order to be valid. In our example, a Contact Information array should contain at least 1 contact object in order to be valid.
  • a “maxItems” property specifying the maximum number of elements the array can contain in order to be valid. In our example, a Contact Information array can contain up to 5 contact objects in order to be valid.

Our Contact Information Array Schema is an array of objects. Each object is a Contact Information Object composed of 4 properties: “name”, “phone”, “mobile” and “email”. There are some things worth noting in the JSON Schemas defining each of these properties:

  • the “enum” property specifies a closed list of allowed values for a property. In our example, a Contact Information Object can be a “home”, “work” or “other” type of contact.
  • the “format” property specifies a valid format for a property, using a predefined (and extensible) set of supported formats. In our example, the “phone” and “mobile” properties have a “phone” format, while the “email” property has an “email” format. Please note that the specification says that implementations are not obliged to support all the formats it lists.
Second the instance …
[
  { "name" : "home", "phone" : "+302109349764", "email": "nico@vahlas.eu" },
  { "name" : "work", "phone" : "+302108029409", "email": "nvah@instore.gr" }
]
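
And for contrast, an instance that should be rejected: “vacation” is not one of the values allowed by the “enum” of the “name” property.

[
  { "name" : "vacation", "email" : "nico@vahlas.eu" }
]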

Implementing the JSON Schema specification in Java

As I mentioned in the introduction of this post, I have decided to write a Java-based implementation of the specification to validate JSON strings against JSON Schemas. When I started, my initial intention was to “simply” port the Javascript implementation of the “core” specification … but when I dived into the code my opinion changed, and I decided to write something more “Java-like”, if I may say so.

Infrastructure

Roughly, the infrastructure I use for this little project is the following:

  • Jackson as the JSON processor library: I have used it for the Europass project recently and liked it; it’s the most complete Java-based JSON processor I have found so far.
  • Git as the version control system: I love the idea of distributed version control and I had wanted to try Git for a long time.
  • Gitorious to host the project: JSON Schema Validation in Java, the public code repository, the wiki and project page.
  • Maven and Eclipse for the development tools and infrastructure with the EGit plugin

Status

Roughly, the status / progress of my work at the time of this writing is the following:

  • I have started the implementation but I am not far enough to make it publicly available
  • I have started a discussion on the JSON Schema Google Group and have had feedback from the lead developer of the Jackson project: seems there is a lot of interest in the thing
  • I have created a project and a repository on Gitorious
  • I am writing this post :-)

It’s a lot of work to do all this setup … I had not realised that it would take me so much time. I hope it will be worth it. Voilà !

Some thoughts on stress testing web applications with JMeter (part 2)

March 30, 2010

In this second part on testing web applications with JMeter, I will mainly write about running the test plans, recording the results and interpreting them.

When do I stop ?

One of the main questions you have to ask yourself when you start stress testing a web application is: when do I stop? This question is not as easy as it seems: the answer depends on your initial objectives and on “scientific” criteria that let you decide when those objectives have been met. Eventually, it comes down to measuring and interpreting the “results” of your stress tests.

Before going any further, we should spend some time on the measurable outcomes of a stress test. There are mainly 2 interesting measures that you can record when you run a stress test on a web application:

  • The throughput is the number of requests per unit of time (seconds, minutes, hours) that are sent to your server during the test.
  • The response time is the elapsed time from the moment a given request is sent to the server until the moment the last bit of information has returned to the client.

The throughput is the real load processed by your server during a run but it does not tell you anything about the performance of your server during this same run. This is the reason why you need both measures in order to get a real idea about your server’s performance during a run. The response time tells you how fast your server is handling a given load.

We are now much closer to finding an answer to our initial question: you can stop stress testing your application when, for a measured throughput, the measured response time is “too high”. This is the right answer in an ideal world where information systems behave in a deterministic manner … another way to answer our question could also be: you can stop stress testing your application when your system crashes / collapses / starts to behave unexpectedly :-)

However, I will stick to our first answer for a while, as it contains another interesting question: what is a “high” response time for a web application (or any application or information system used by real people)? A very interesting answer is given in the article already mentioned in my previous post and in this one as well. In short, based on usability studies, it is possible to define response time limits where the user’s interaction with an information system radically changes. These limits are tightly related to the nature of the human being: psychology as well as brain performance :-)

  • 0.1 second is about the limit for having the user feel that the system is reacting instantaneously, meaning that no special feedback is necessary except to display the result.
  • 1.0 second is about the limit for the user’s flow of thought to stay uninterrupted, even though the user will notice the delay. Normally, no special feedback is necessary during delays of more than 0.1 but less than 1.0 second, but the user does lose the feeling of operating directly on the data.
  • 10 seconds is about the limit for keeping the user’s attention focused on the dialogue. For longer delays, users will want to perform other tasks while waiting for the computer to finish, so they should be given feedback indicating when the computer expects to be done. Feedback during the delay is especially important if the response time is likely to be highly variable, since users will then not know what to expect.

Using these limits allows us to give a precise end point to the stress tests of a system; it helps us define, in collaboration with our client (or users), what an acceptable response time is. For example, the last time I ran stress tests for a client, we agreed that the acceptable upper limit of the response times for his system was 7 seconds: he wanted to know how many concurrent users his system would handle.

The remaining problem now is how to measure / estimate the throughput and response times of our system using JMeter: some simple statistics and mathematics are needed here.

Run your test plan and record the meaningful measures …

First of all, JMeter provides several different “listeners” for recording these 2 variables in various ways (graphs, tables, trees, files). I would say that most of these “listeners” are unnecessary or, to put it differently, one of them is a must-have in order to have all the necessary information in hand: the Summary Report.

In order to understand this report and to implement scenarios efficiently we must keep the following things in mind:

  • JMeter records response times and throughput for each “sampler” of each “thread group” defined in your test plan.
  • In the Summary Report, one line is displayed for each different “sampler” based on the sampler’s names: you can group  or differentiate samplers in the report just by playing with their names.
  • Each “sampler” is executed  many times: the Summary Report provides us with mean values (and standard deviations) for the throughput and response times of each named “sampler”.
  • Global values (mean and standard deviation) for throughput and response times are also calculated in the Summary Report.
  • The Summary Report allows you to store the measures of each run in a “csv” file: you can thus analyse and interpret the results in a spreadsheet program.

Other reports are also useful particularly at the beginning when building and testing your scenarios:

  • The View Results Tree is very handy when “debugging” a scenario as it allows you to monitor all the HTTP Requests and Responses exchanged with the server. The drawback is that it consumes too much memory to be used in a large stress test.
  • The View Results in Table listener is also useful in the early stages of the stress test implementation as it gives a good and fast overview of the execution of a test plan. However, this listener also consumes too much memory to be used in a large stress test.
  • I have also found some very interesting JMeter plugins on a Google Code project. One of them, the “Active Threads Over Time” helped me a lot when trying to set the ramp up throughput by playing with the “ramp up” and “number of threads” parameters of the thread group.

One more element that you should keep in mind when performing stress tests is that the computer running the tests can itself become a performance bottleneck:

  • It is very common when running stress tests on large production systems to reach the limits of the computer running the tests before reaching the limits of the tested server.
  • When the computer running the tests is reaching its limits (memory, number of threads, cpu …) all the measures recorded by the stress tests tool are wrong or at least biased.
  • There are two ways to face this problem: (1) optimize your scenarios and the way you run them, or (2) set up a distributed testing infrastructure.

(1) In the JMeter manual, you will find the following advice in section 16.6 of the Best Practices page:

Some suggestions on reducing resource usage.

  • Use non-GUI mode: jmeter -n -t test.jmx -l test.jtl
  • Use as few Listeners as possible; if using the -l flag as above they can all be deleted or disabled.
  • Rather than using lots of similar samplers, use the same sampler in a loop, and use variables (CSV Data Set) to vary the sample.
  • Don’t use functional mode
  • Use CSV output rather than XML
  • Only save the data that you need
  • Use as few Assertions as possible

If your test needs large amounts of data – particularly if it needs to be randomised – create the test data in a file that can be read with CSV Dataset. This avoids wasting resources at run-time.

(2) In the JMeter manual, you will find the Remote Testing page giving you the precise instructions necessary to set up a distributed testing environment, and a PDF describing how it all works architecture-wise. My experience is that it is all very easy to set up and that it gives excellent results: in the end, it comes down to running the “jmeter-server” script on the slaves and declaring the slave hosts in the master’s configuration file (jmeter.properties); a short example follows the list below. The only 2 or 3 little problems I came across with distributed testing are:

  • Do not forget to give memory to your JMeter slaves and master (set Xms and Xmx in the jmeter.properties file): the default values are very low.
  • If you use external resources such as a CSV Data Set, you should have them on all your slave installations at the same location (a full path is needed in your scenario).
  • Beware of multiple thread groups and schedulers: they leak huge amounts of memory on the slaves.
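
As an illustration, and assuming a stock JMeter installation, declaring two slaves on the master side comes down to a single line in jmeter.properties (the host addresses below are made-up examples; remote_hosts is the standard property name):

# In the master's jmeter.properties: comma-separated list of machines running jmeter-server
remote_hosts=192.168.0.11,192.168.0.12

The master can then be started with the -r option (for example jmeter -n -t test.jmx -r -l test.jtl) to run the test plan on all the hosts listed above.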

Last but not least, you should never perform your stress tests against a server or infrastructure that was just started. Servers usually need a warm-up before they reach their full speed: this is particularly true for the Java platform where you surely don’t want to measure class loading time, JSP compilation time or native compilation time.

Interpret the results …

In order to interpret the results of a stress test, it is important to understand some basic elements of statistics:

(1) The mean value (μ)

The following equation shows how the mean value (μ) is calculated:

μ = (1/n) * Σ_{i=1…n} x_i

The mean value of a given measure is what is commonly referred to as the average value of this measure. An important thing to understand is that the mean value can be very misleading as it does not show you how close (or far) your values are from the average. An example is always better than a long explanation.

Let’s assume that we are measuring response times in milliseconds in 2 different stress tests:

Stress Test 1:

  • x1=100
  • x2=110
  • x3=90
  • x4=900
  • x5=890
  • x6=910

gives you μ = 1/6 * (100 + 110 + 90 + 900 + 890 + 910) = 500 ms

Stress Test 2:

  • x1=490
  • x2=510
  • x3=535
  • x4=465
  • x5=590
  • x6=410

gives you μ = 1/6 * (490 + 510 + 535 + 465 + 590 + 410) = 500 ms

In both cases the mean value (μ) is the same. However if you observe closely the values taken by the response times you will see that in the first case, the values are “far” from the mean value where in the second case, the values are “close” to the mean value. It is quite obvious with this example that a measure of this distance to the mean value is needed in order to draw any kind of conclusion based on the mean value.

(2) The standard deviation (σ)

The following equation shows how the standard deviation (σ) is calculated:

σ = √( (1/n) * Σ_{i=1…n} (x_i − μ)² )

The standard deviation (σ) measures the typical distance of the values from their average (μ). In other words, it gives us a good idea of the dispersion or variability of the measures around their mean value. Let’s go back to our example and calculate the standard deviation of each of our theoretical stress tests:

Stress Test 1:

σ = sqrt( ( (100-500)^2 + (110-500)^2 + (90-500)^2 + (900-500)^2 + (890-500)^2 + (910-500)^2 ) / 6 ) ≈ 400 ms

Stress Test 2:

σ = sqrt( ( (490-500)^2 + (510-500)^2 + (535-500)^2 + (465-500)^2 + (590-500)^2 + (410-500)^2 ) / 6 ) ≈ 56 ms

The 2 values of the standard deviation calculated above are very different:

  • in the first case, the standard deviation is high compared to the mean value, which shows us that our measures are very variable (or mostly far from the mean value) and that the mean value is not very significant.
  • in the second case, the standard deviation is low compared to the mean value, which shows us that our measures are not dispersed (or mostly close to the mean value) and that the mean value is significant.

(3) The sample size and the quality of the measure

Another interesting question is whether our calculated mean value is a good estimation of the “real” mean value. In other words, when calculating the mean response time during a test run, do we have a good estimation of the “real” mean response time of the same scenario repeated indefinitely? In probability theory, the Central Limit Theorem states conditions under which the mean of a sufficiently large number of independent random variables, each with finite mean and variance, will be approximately normally distributed.

The measures of response times and throughput obtained during stress tests comply with the conditions of the Central Limit Theorem, as we usually have a large number of independent and random measures with a finite (calculated by JMeter) mean value and standard deviation. We can thus assume that the mean values of the response time and the throughput are approximately normally distributed.

This allows us to calculate a Confidence Interval for these mean values. The Confidence Interval gives us a measure of the quality of our mean values, as it quantifies the variability of the mean value (an interval) with a predefined probability. You can, for example, decide to calculate your Confidence Interval at 95%, which tells you that the probability of the “real” mean lying within the calculated interval is 95%. Conversely, you can decide to calculate the probability of having the mean value within a given interval (see the examples below).

The following equation shows how the Confidence Interval (CI) is calculated:

CI = [μ – Z*σ/√n, μ + Z*σ/√n]

where:

  • μ is the calculated mean value of our sample,
  • σ is the calculated standard deviation of our sample,
  • and Z is the value for which the area under the “bell-shaped curve” of the standard normal distribution between −Z and +Z equals the chosen confidence level C (equivalently, the area between 0 and Z is C/2).

The following table gives values of Z for various given values of Confidence C:

C Z
0.80 1.281551565545
0.90 1.644853626951
0.95 1.959963984540
0.98 2.326347874041
0.99 2.575829303549
0.995 2.807033768344
0.998 3.090232306168
0.999 3.290526731492
0.9999 3.890591886413
0.99999 4.417173413469

Source: http://en.wikipedia.org/wiki/Normal_distribution

If we go back to our previous examples, we can calculate the confidence intervals of our mean values at 95%:

CI1 = [500 – 1.96*400/sqrt(6); 500 + 1.96*400/sqrt(6)] ≈ [180; 820]

CI2 = [500 – 1.96*56/sqrt(6); 500 + 1.96*56/sqrt(6)] ≈ [455; 545]

This means that the probability of the mean response time lying within the calculated confidence interval is 95%.

We can also calculate the probability of having the mean value in the interval [490, 510]:

10 = Z1 * 400 / sqrt(6) => Z1 = 10 * sqrt(6) / 400 => Z1 ≈ 0.06 => C1 ≈ 5%

10 = Z2 * 56 / sqrt(6) => Z2 = 10 * sqrt(6) / 56 => Z2 ≈ 0.44 => C2 ≈ 34%

Notes:

These are just given as examples of how to calculate the confidence interval … the conditions are not met for the Central Limit Theorem with such a small sample.

The last 2 examples were computed using standard normal distribution tables.
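
For those who want to reproduce the arithmetic above, here is a small, self-contained Java sketch (not part of JMeter) that applies the formulas of this section to the two sample sets; the class and method names are of course arbitrary:

public class StressStats {

    public static void main(String[] args) {
        double[] run1 = {100, 110, 90, 900, 890, 910};
        double[] run2 = {490, 510, 535, 465, 590, 410};
        describe("Stress Test 1", run1);
        describe("Stress Test 2", run2);
    }

    static void describe(String name, double[] samples) {
        int n = samples.length;
        // Mean value: sum of the samples divided by their number.
        double sum = 0;
        for (double x : samples) {
            sum += x;
        }
        double mean = sum / n;
        // Standard deviation: square root of the mean squared distance to the mean.
        double squaredDistance = 0;
        for (double x : samples) {
            squaredDistance += (x - mean) * (x - mean);
        }
        double sigma = Math.sqrt(squaredDistance / n);
        // 95% confidence interval around the mean (Z = 1.96).
        double halfWidth = 1.96 * sigma / Math.sqrt(n);
        System.out.printf("%s: mean = %.0f ms, sigma = %.0f ms, 95%% CI = [%.0f; %.0f]%n",
                name, mean, sigma, mean - halfWidth, mean + halfWidth);
    }
}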

Conclusion

As a conclusion, we can say that the best way to interpret our stress test results is to use the Summary Report provided by JMeter and to store it in a “csv” file for every run. In this report we can find the mean response time, the mean throughput, the standard deviation of the response time and the standard deviation of the throughput, for every named sampler and globally for the run.

Based on the explanations above, I recommend the following methodology:

  • If we have a high number of samples (which is usually the case in stress tests) and a low standard deviation, then we can conclude without risk that we have a good estimation of the mean value of both the response time and the throughput of our system, and that the “real” numbers will be close to the calculated mean values.
  • If we have a high number of samples (which is usually the case in stress tests) and a high standard deviation, we probably have a good estimation of the mean value but should nevertheless consider estimating a confidence interval. In any case, if the variability of the measure is high, investigation is needed from a technical point of view, as variability of response times and throughput is obviously related to instability of the system under test.
  • If we have a low number of samples and a high standard deviation, then we almost certainly have a very bad estimation of the mean value, which means that we are measuring the wrong thing, the wrong way.

Monitor your systems while you run the tests …

It is often useful to monitor the system (and its various components) while you are stressing it. The available tools vary from one platform to another. On the Java platform you may use the excellent “jvisualvm” provided with the latest versions of the JDK, which interacts with the various monitoring hooks integrated in the JVM.

Monitoring Java Web Applications is a subject in itself … I can try to share my thoughts on it some time … in another post ;-)

Some thoughts on stress testing web applications with JMeter (Part 1)

March 17, 2010

A small intro …

Now that I am almost finished with the “stress test” task I was talking about in my previous post, I have several thoughts and experiences to share on the subject. I am also planning to write about Java web application profiling in a following post, as it somehow relates to the results of a “stress test” task.

The tool I have used to carry out stress test tasks is JMeter (the latest version available at the time of this writing), so I will write about JMeter. However, I am interested in any feedback (experience) concerning other tools (or JMeter itself).

State clearly your objectives …

It is important that you state your objectives clearly as the overall methodology of the stress tests will greatly depend on these objectives.

Some classical examples follow:

  • Give a precise estimate of the maximum load that a given system may serve (peak): this is usually done in order to help plan the future infrastructure of a live system.
  • Find precisely the bottlenecks of a live system during a peak: this is usually done as a preliminary task to profiling and performance tuning tasks.
  • Find precisely the origin of eventual leaks (memory, connection to resources, various resources) during a long run: this is also usually done as a preliminary task to profiling and tuning tasks.
  • Prove that the system you have implemented can handle a theoretical load: usually this was a client’s requirement expressed during the very early stages of a project (for example in the call for tender).
  • Any combination of the aforementioned objectives …

These different objectives lead to different types of scenarios. In my opinion, a good methodology is always to try to implement scenarios that are as close as possible to real and typical use cases of the system you want to test. However, in some cases (bullets 2 and 3 above) you may need to write artificial scenarios that help you pinpoint a functionality of your system that has performance problems.

The following paragraph is about writing “real case” scenarios and test plans covering the aforementioned objectives.

Write good quality scenarios and test plans …

First, a distinction must be made between “scenarios” on one hand and “test plans” on the other:

A scenario is (or at least should be) an actual use case of your application carried out by a single user. In JMeter terms, a scenario is a combination of “samplers” and “controllers” that will be executed by a single “thread” of a “thread group”.

A test plan is the “way” a given scenario will be executed in order to achieve a given objective (as the ones described in the previous paragraph). In JMeter terms, the “way” the scenario will be executed mainly means playing with the following variables on the thread group: the number of threads, the ramp up time and the number of loops executed by a thread.

It is very important to understand the exact meaning of these 3 parameters:

  • The “number of threads” in a thread group is the actual number of threads spawned by JMeter, each one of them used to execute the scenario. In other words, this variable is the number of users executing a “real life” use case on your system. This number is not the number of concurrent / parallel users executing a “real life” use case on your system: the concurrency of the users depends on both the duration of your scenario and the ramp up time configured on the thread group.
  • The “ramp up time” in a thread group is the actual time taken by JMeter to spawn all the threads. If the ramp up time is small compared to the number of threads and the mean duration of a scenario, then the number of concurrent threads accessing your system will be high, and vice versa. A rough estimation of the throughput (number of requests per second) during the ramp up period of your test plan is: number of threads / ramp up time (in seconds); for example, 300 threads ramped up over 60 seconds means roughly 5 new threads starting every second.
  • The “number of loops” in a thread group is the actual number of times that the scenario will be executed by each thread.

Now let’s go back to the implementation of “real case” scenarios using JMeter. I recommend this interesting article on the subject, sent to me by a colleague (thanks Petros ;-) ). Some very good methodological hints concerning the writing of scenarios are given in its first paragraphs. Basically, I can give a few main hints on the subject that are easy to follow and implement with JMeter:

  • Keep scenarios simple:
    Each scenario should correspond to one use case. This makes things much more simple and logical particularly when it comes to interpreting the results of the stress tests.
  • Use “recording” techniques to generate your scenario from a “real” usage of the application:
    JMeter comes with a proxy component, which when started, will record all the HTTP Requests and Response cycles originating from a web browser configured to access your system through this proxy. There are well-known problems with the usage of this proxy when dealing with HTTPS: often, a simple solution is to do all the recording in HTTP and turn the protocol to HTTPS in your scenario afterwards (this supposes that you can make your system run under HTTP for the time of the recording).
  • Don’t forget to record the “think time” of the users:
    The “think time” of a user is the elapsed time between 2 user actions. During this time, the user may be thinking about what to do next, answering an urgent phone call, talking with a friend … this must be part of the scenario. Fortunately, JMeter allows you to record these “think times” and translate them into “Gaussian Waits” inside your scenario (see the article mentioned above for hints on how to do it). In any case, you should always have “waits” in your scenarios simulating these “think times” of the real users in the most realistic manner.
  • Read the JMeter User’s Manual particularly the “Component Reference” in order to find all possibilities provided by the tool. For example:
    You can use an external csv file containing (username, password) couples in order to have each thread login into your system with different credentials.
    You can use regular expressions to parse HTTP Responses and extract data necessary to chain your samplers

Once you have your scenario ready, you must configure your test plan in order to meet your objectives. The tuning of the main parameters of your test plan (number of threads, ramp up and number of loops) is often a “trial and error” procedure. However, we can give the following 3 hints:

  • You should try to have a constant throughput during a run:
    It is often very difficult to “control” the throughput particularly during the ramp up period
  • If your objective is to simulate a “peak”:
    You should have a “high” number of threads and a “low” ramp up time and number of loops
  • If your objective is to simulate a “long run”:
    You should have a “medium” number of threads, a “higher” ramp up time and a “high” number of loops

Note: The terms “high”, “higher”, “medium” and “low” are deliberately qualitative in the 3 bullets above, as they depend on the system you are testing.

To be continued …

This post is already too long: it seems I have too much to say on the subject ;-) Never mind, I will carry on in a following post tomorrow, covering the remaining subjects: running the test plans, recording the meaningful measures, interpreting the results, monitoring the systems …

JVM Monitoring with Oracle Application Server 10g R2

March 2, 2010

A little introduction

I was recently asked to perform some stress tests on a system running Oracle Application Server 10g Release 2 installed on Windows 2003 server. Among other things, the objective was to monitor the system and profile the code in order to detect possible flaws in the code and the server configuration.

One of the tasks I had to do was to find a way to monitor the application server’s JVM during the stress tests. Naively, I thought that I could easily use “visualgc” (jvmstat 3.0), or even better the “jvisualvm” provided with all the latest releases of the JDK. The rest of this post shows how wrong (and ignorant) I was …

First thing to do: install a decent JVM

As you may already know, only “recent” versions of the JDK are bundled with monitoring tools (jps, jstat, jstatd, jvisualvm …) and unfortunately Oracle Application Server 10g R2 is not bundled with something that can be called a “recent” JVM … JDK 1.4.2.

However, this is no real problem, as you can monitor an older JVM with the tools provided in a recent one: more precisely, you can monitor any JVM with a version number greater than or equal to 1.4.1 (see the jvmstat documentation). Basically, you just need to:

  • download and install the latest available JDK (for example 1.6.0_18): jps, jstat and jstatd are included starting with jdk 1.5
  • download and install jvmstat 3.0 if you wish to have the “visualgc” tool and documentation for all the monitoring tools in one bundle.

Once you have done this you can try to run jps on your Windows 2003 Server where you have your Oracle Application Server 10g R2 installed … and … no, it won’t show you any of the JVM’s of the platform :-)

Still, you can check that everything works as expected by writing a simple test class such as the following one and running it with the JDK bundled with the Oracle Application Server:

public class Test {
  public static void main(String[] args) throws Exception {
    while (true) {
      // Print a heartbeat every 10 seconds so that jps has a long-lived process to list
      System.out.println(".");
      Thread.sleep(10000L);
    }
  }
}

Once you have run it, this class should output a dot every 10 seconds in your console. If you run jps from another console, you should see a Java process corresponding to your running test class listed in the output produced by the jps tool. This should be enough to reassure you and prove that the jps from a JDK 1.6 can monitor Java processes originating from a JDK 1.4.2 ;-)

As a matter of fact the main reason why you don’t “see” the Java processes of your Oracle Application Server listed in the output produced by the jps tool is that they are run by very different OS users. This user / permission issue is documented in each of the monitoring tools: for example for jps see towards the end of the “Description” section.

Second thing to do: run the tools with the proper user

Oracle Application Server is installed as a Windows Service and as such all its processes are owned and executed by the Local System User.

When you run the jps tool (or any other monitoring tool provided with your freshly installed JDK 1.6), the user owning and executing the monitoring processes is the one you used to log into your Windows Server. There are several ways to run a command prompt as the same “Local System User” that runs the Windows Services; 2 of them are documented here. I chose to use the psexec tool from Sysinternals:

psexec -i -s cmd.exe

Once you have a command prompt owned by the “Local System User”, all the processes run from there inherit this user. If you run jps from within this command prompt, you will find more Java processes listed in the output of the tool … but … once again no luck, the Java processes corresponding to the “OC4J Homes” running your web applications are not there :-)

Third thing to do: set the temporary directory of the JVMs

That is the trickiest part of the procedure, and finding a solution involved a “deep dive” into the source code of the jps tool and the jvmstat classes.

As far as I have understood, starting from JDK 1.4.1 all JVMs can produce real-time performance metrics in files. These files are located in the temporary directory of the user running the Java process, under a folder named “hsperfdata_<user>”. On the Windows platform, for a Java process spawned by a Windows service (and thus owned by the “Local System User”), the performance file is located under c:\WINDOWS\Temp\hsperfdata_SYSTEM and is named after the id of the process.

When you run jps (or any other monitoring tool), the OS user running the command is used to determine the directory where the performance files should be found (based on the user’s temporary directory). For example, jps will return an entry for each file present in this directory.

However, when a Java process corresponding to an “OC4J Home” is spawned by OPMN, the location of the temporary directory is overridden through an environment variable and points to the temporary directory of the user who installed the server (some Administrator user).

The problem is that:

  • on one hand, you have to run jps as the “Local System User” in order to have the sufficient privileges to monitor the Java processes of the Application Server (because it is started as a service).
  • on the other hand the performance files are not located under the temporary directory of this same “Local System User”

The solution is to override the location of the temporary directory of the “OC4J Home”. Fortunately, this is easy using the Enterprise Manager console: in the “Administration” tab of every “OC4J Home”, there is a “Server Properties” link that opens a web form where you can find an “Environment” section. In this section, you just need to add an environment variable named “TEMP” with a value set to “c:\WINDOWS\Temp”.

Once this is done and your “OC4J Home” is restarted, your jps tool run as the “Local System User” will return (among others) the Java processes corresponding to your “OC4J Homes”. Moreover, under the directory “c:\WINDOWS\Temp\hsperfdata_SYSTEM” a new file will appear for each of these Java processes.

Fourth thing to do: the final monitoring architecture

As I have 3 Oracle Application Servers 10g installed on 3 different servers, my initial idea was to be able to monitor them all from a remote PC, using the “jstatd” tool on the servers and the “visualgc” or “jvisualvm” tools on the PC.

Running jstatd on a server is not very different from running jps. It has to be run as the “Local System User” with a policy file allowing the embedded RMI Server to be started (see jstatd documentation for more details):

grant codebase "file:${java.home}/../lib/tools.jar" {
   permission java.security.AllPermission;
};
jstatd -J-Djava.security.policy=jstatd.all.policy

In order to check that everything is OK, the jps tool can be used from a remote PC, passing the host name or IP of the server.

jps -l <server_host_name_or_ip>

An even better way to do this and to have it automated is to install jstatd as a Windows Service as:

  • it will run with the needed user (Local System User)
  • it can be started automatically

Instructions on how to install jstatd as a Windows Service can be found here. A brief summary follows:

  1. Get the instsrv.exe and srvany.exe tools for example from the Windows Server 2003 Resource Kit Tools.
  2. Run the following command to install the service:
    c:\<location>\instsrv.exe jstatd c:\<location>\srvany.exe
    
  3. Use a Windows Registry editor to create a key named “Parameters” under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\jstatd. Then, inside the Parameters key, create a new String value (REG_SZ) named Application containing the full path to jstatd.exe and the security parameter (policy file); an example command is given after this list.
  4. Use the Windows Services management application to check out that the jstatd service is configured to run as the local system user.
  5. Start the jstatd service.
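
As an illustration of step 3, the “Application” value can also be created from the command line with reg.exe; the paths below are placeholders to adapt to your own JDK and policy-file locations:

reg add "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\jstatd\Parameters" /v Application /t REG_SZ /d "c:\<jdk_location>\bin\jstatd.exe -J-Djava.security.policy=c:\<location>\jstatd.all.policy"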

Once jstatd is installed as a Windows Service on every server, the “jvisualvm” tool can be used to connect to these servers (Remote Host) and monitor their instrumented JVM’s. The “visualgc” tool can either be embedded as a plug-in of the “jvisualvm” tool or be run independently against the various instrumented JVM’s on the various servers where jstatd is running.

A kite-flying morning

February 15, 2010

The facts according to Dimous …

We had a kite day! While I was holding the kite, my dad threw me in the air to see if I could fly, and the kite carried me away! My dad caught me again really fast: "Phew! I've got you!"
(This text is by Dimitri)

Otherwise, we also have a few photos … which show that with some wind and a traction kite, young and old alike can have great fun.

Dimous flying the kite

That's Dimitri … and what he told above is true … there was so much wind that he could barely stay on the ground, and when I tried to throw him in the air (holding him under the arms) he really did take off and I had to jump up quickly to catch him! Too bad we have no photo of that event!

Lucachon flying the kite

Lucas flew the kite too … but that time I didn't really try to make him fly :-)

Taniou flying the kite

And that is Tatiana … she too could barely stay on the ground!

And a few videos … for those who want more details and would also like to see the kite.

A few explanations …

Today is "Clean Monday" in Greece … don't ask me exactly what it is … the important thing is that it is a public holiday, that the weather was beautiful and windy, and that according to Greek tradition this day is dedicated to kites. So we brought out the great traction kite that Tatiana gave me a long time ago … you know, one of those kites that are also used for kitesurfing, only smaller … and we had a lot of fun.

Categories: Personal