We use the tulip database to generate our sites.xml, which is used in probing the landmarks. We added PingER nodes from the Nodedetails database to the tulip database, subject to some defined rules: we only added nodes that have a traceroute server. To implement this, we developed three packages. Many of the scripts are executed automatically from trscrontab running under pinger@pinger.slac.stanford.edu (see section below).

  •  TULIP/ANALYSIS/NODEDETAILNODES.pm
  •  insert_sites-xml.pl
  •  create_sites-xml.pl

We also add nodes from PlanetLab.

TULIP/ANALYSIS/NODEDETAILNODES.pm

It is found at /afs/slac/package/pinger/tulip/TULIP/ANALYSIS/NODEDETAILNODES.pm. To build this module we used some predefined Perl modules and scripts: to get data from the PingER Nodedetails database we require '/afs/slac/package/netmon/pinger/nodes.cf', and we use the standard package Text::CSV_XS to convert our data into comma-separated values. For a node to be a candidate tulip landmark it must satisfy a few conditions: the node must have a traceroute server, the traceroute server must not be set to NOT-SET, and its project type must not be set to "D", which means disabled as per Nodedetails database semantics. Nodes that satisfy these conditions are put into a separate array. This array is used by insert_sites-xml.pl to insert these sites into the tulip database.
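The selection rules above can be sketched as follows. This is a minimal illustration, not the module's actual code; the field names traceroute_server and project_type are assumptions, not the real Nodedetails schema.

```perl
use strict;
use warnings;

# Decide whether a Nodedetails record qualifies as a tulip landmark
# candidate. Field names are illustrative, not the real schema.
sub is_landmark_candidate {
    my ($node) = @_;
    my $trs = $node->{traceroute_server} // '';
    return 0 if $trs eq '' || $trs eq 'NOT-SET';      # must have a real traceroute server
    return 0 if ($node->{project_type} // '') eq 'D'; # 'D' = disabled
    return 1;
}

# Collect the qualifying nodes into a separate array, as the module does.
my @nodes = (
    { host => 'a.example.net', traceroute_server => 'http://a.example.net/tr', project_type => 'A' },
    { host => 'b.example.net', traceroute_server => 'NOT-SET',                 project_type => 'A' },
    { host => 'c.example.net', traceroute_server => 'http://c.example.net/tr', project_type => 'D' },
);
my @candidates = grep { is_landmark_candidate($_) } @nodes;
print scalar(@candidates), "\n";   # only a.example.net qualifies
```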

...

This Perl script is used to create the insert query from the data of the above nodes. Again using the Perl package Text::CSV_XS, we divide our data into separate chunks. The data is then fed to the structure that contains the parameters for the query. The script resolves each host against the hostnames taken from NODEDETAILNODES.pm; this helps eliminate bad hosts if they exist. Before inserting new nodes into the database, it checks whether the node is already present. We use ipv4Addr as our unique key: we traverse the tulip database and check whether a node with the same ipv4Addr exists. If it exists we ignore the entry; if not, we go ahead and insert it into the database. It has a use TULIP::ANALYSIS::NODEDETAILNODES;.
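The duplicate check and query generation can be sketched like this. The table name "nodes" and the column names are assumptions, not the real tulip schema, and a production version should use DBI placeholders rather than string interpolation.

```perl
use strict;
use warnings;

# Sketch of the ipv4Addr-based duplicate check and insert-query
# generation. Table and column names are illustrative only.
sub build_insert {
    my ($existing_ips, $site) = @_;   # $existing_ips: ipv4Addrs already in tulip
    return undef if $existing_ips->{ $site->{ipv4Addr} };  # already present: ignore
    return sprintf(
        "INSERT INTO nodes (hostname, ipv4Addr) VALUES ('%s', '%s')",
        $site->{hostname}, $site->{ipv4Addr}
    );
}

my %in_tulip = ( '192.0.2.1' => 1 );   # IPs found by traversing the tulip database
my $dup = build_insert(\%in_tulip, { hostname => 'x.example.net', ipv4Addr => '192.0.2.1' });
my $new = build_insert(\%in_tulip, { hostname => 'y.example.net', ipv4Addr => '192.0.2.7' });
print defined($dup) ? "duplicate inserted\n" : "duplicate skipped\n";
print "$new\n";
```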

create_sites-xml.pl

This Perl script is used to create the sites.xml file, which is further used in our TULIP project as the source of node and landmark information. It uses the template library to generate the required XML. It traverses the tulip database, gets each node, checks its service type, and generates the file with all the parameters available in the database. It has a require '/afs/slac/package/pinger/tulip/insert_sites-xml.pl'.
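The generation step can be sketched as below. The real script uses the template library; this sketch builds the XML directly, and the element and attribute names are illustrative — the actual sites.xml schema may differ.

```perl
use strict;
use warnings;

# Sketch of sites.xml generation from node records pulled out of the
# tulip database. Element/attribute names are assumptions.
sub sites_xml {
    my (@sites) = @_;
    my $xml = qq{<?xml version="1.0"?>\n<sites>\n};
    for my $s (@sites) {
        $xml .= sprintf(qq{  <site name="%s" ip="%s" service="%s"/>\n},
                        $s->{hostname}, $s->{ipv4Addr}, $s->{service_type});
    }
    return $xml . "</sites>\n";
}

print sites_xml(
    { hostname => 'a.example.net', ipv4Addr => '192.0.2.1', service_type => 'traceroute' },
);
```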

Tulip Transition to Sites.XML

Our next step in the process is to transform TULIP so that it can get its data from the generated sites.xml. TULIP version 1 had two different data sources: one to get data from the Nodedetails database, and the other to fetch data from a list containing the PlanetLab sites.

...

After preliminary evaluation, we run tulip from the command line and test the added nodes for their ping servers. The complete test has the TULIP client trace 107 sites. This generates a log file listing all the successful and failing landmarks.

tulip-log-analyze.pl

The purpose of this script is to parse the log file and give us a bird's-eye view of which landmarks are failing and which are successful. We then disable all nodes that do not reply to requests from reflector.cgi. The script also computes the total delay/time taken by each landmark to respond. We will use this parameter for the final selection of our tier 0 landmarks.
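The analysis can be sketched as follows. The log line format used here ("landmark OK|FAIL delay-ms") is hypothetical — the real TULIP log format may differ — but the tallying of status and per-landmark delay is the idea the script implements.

```perl
use strict;
use warnings;

# Sketch of the log analysis: tally the last seen status and the total
# delay per landmark. The line format below is a hypothetical stand-in
# for the real TULIP log format.
sub analyze_log {
    my ($log) = @_;
    my (%status, %delay);
    for my $line (split /\n/, $log) {
        my ($lm, $st, $ms) = $line =~ /^(\S+)\s+(OK|FAIL)\s+(\d+)/;
        next unless defined $lm;
        $status{$lm} = $st;      # last status wins
        $delay{$lm} += $ms;      # accumulate response time
    }
    return (\%status, \%delay);
}

my $log = "lm1.example.net OK 120\nlm2.example.net FAIL 0\nlm1.example.net OK 80\n";
my ($status, $delay) = analyze_log($log);
print "$_ $status->{$_} $delay->{$_}\n" for sort keys %$status;
```

Landmarks whose status is FAIL would then be disabled, and the accumulated delay ranks the survivors for tier 0 selection.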

...