Introduction

Classifying countries by their development is difficult. One must first decide what development-related quantities to measure, and then measure them. There are costs and practical concerns: what can be measured, how useful it is, how pervasive it is, how well defined it is, how it changes over time, whether one is measuring the same thing for each country, and the cost of measuring. Various organizations such as the ITU, UNDP, CIA and World Bank have constructed indices based on measured items such as life expectancy, GDP, literacy, phone lines and Internet penetration (see below). These measures take time to gather and so are often dated and only available, if at all, at widely separated intervals.

...

"The size of the Internet infrastructure is a good indication of a country's progress towards an information-based economy. ...
But measuring the numbers of users is not easy in developing countries because many people share accounts, use corporate
and academic networks, or visit the rapidly growing number of cyber cafés, telecentres and business services. Furthermore,
simply measuring the number of users does not take into account the extent of use, from those who just write a couple of
emails a week, to people who spend many hours a day on the net browsing, transacting, streaming, or downloading. As a
result, new measures of Internet activity are needed to take these factors into account.
 
One indicator that is becoming increasingly popular is to measure the amount of international Internet bandwidth used by
a country - the 'size of the pipe', most often measured in Kilobits per second (Kbps), or Megabits per second (Mbps).  Most
of the Internet traffic in a developing country is international (75-90%), so the size of its international traffic compared to
population size provides a ready indication of the extent of Internet activity in a country."

Credits:
Research & coordination - Mike Jensen mikej@sn.apc.org
Conceptualisation, management and refinement - Richard Fuchs rfuchs@idrc.ca & Heloise Emdon hemdon@idrc.ca
Design, DTP and Layout: Adam Martin lee@wildcoast.com
Background research and liaison: Lee Martin & Rochelle Martin lee@wildcoast.com

An alternative Internet-based method is the work of Tom Vest of CAIDA, comparing the numbers of Autonomous System Numbers (ASNs) related to Internet deployment, using BGP for the measurement data (also see Internet Traffic Exchange: Market Development and Measurement of Growth from the OECD).

The approach we pursue is to use the end-to-end measurements of the PingER project. This project has been gathering end-to-end Internet performance measurements since 1995 and currently measures the end-to-end Internet performance of over 125 countries. The scatter plot below shows the correlation of PingER loss measurements made for the period Jan-Sep 2007 from SLAC to the world with the GDP/capita (depicting the productivity of a country) for 2006.

[Figure: scatter plot of PingER loss measurements vs. GDP/capita]

It is seen that there is a moderate to strong correlation (R2 ~ 0.58). Similar correlations (R2 ~ 0.52) are seen when one compares PingER-derived throughputs and jitter vs. GDP/capita. Stronger correlations are obtained with development indices that are more technology- or Internet-related. For example, if we correlate the PingER performance measurements (jitter, loss, throughput) with the GDP/capita and also with one of the more recent and extensive indices, namely the ITU's Digital Opportunity Index (DOI), we get the R2 values below: first a table of R2 for the correlations of PingER measurements with the DOI and GDP/cap, followed by the actual scatter plots.

 

              Jitter (ms)   Loss (%)   Derived TCP Throughput   Unreachability
DOI              0.58         0.64             0.67                 0.65
GDP/capita       0.61         0.53             0.59                 0.43
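To make the R2 figures concrete, the sketch below shows the underlying computation: a least-squares linear fit and the resulting coefficient of determination. The per-country (DOI, throughput) pairs are invented for illustration, not taken from the actual PingER data.

```python
def r_squared(xs, ys):
    """Coefficient of determination for a least-squares linear fit of y on x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1 - ss_res / ss_tot

# Hypothetical per-country pairs: DOI score vs. log10(derived throughput)
doi = [0.2, 0.35, 0.5, 0.62, 0.75]
log_thru = [2.1, 2.6, 2.9, 3.3, 3.5]
print(round(r_squared(doi, log_thru), 2))
```

In practice the throughput axis is usually taken on a log scale before fitting, since throughput spans several orders of magnitude across countries.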

[Scatter plots: DOI vs Jitter, DOI vs Loss, DOI vs Throughput, DOI vs Unreachability]

[Scatter plots: GDP/capita vs Jitter, GDP/capita vs Loss, GDP/capita vs Throughput (see above), GDP/capita vs Unreachability]

The advantage of these automated methods is that they are not subjective, and they are available at regular intervals, so the derivative with time is relatively easy to track. The disadvantage is that they measure only one component of development, and care still has to be taken to understand the figures and eliminate false readings. The strength of the correlation between the PingER performance measurements and GDP/capita and other development indices is the rationale for pursuing this approach. This document outlines several of the survey-based index classification methods and then compares them with each other and with the PingER measurements.

PingER Measurements 

The PingER project has been described elsewhere. Basically it uses the ubiquitous Internet ICMP echo request/response (ping) facility to measure or deduce metrics such as Round Trip Time (RTT), jitter, loss, and reachability from about 40 monitoring hosts to over 600 monitored (remote) hosts in about 130 countries. As part of this project we recognized the need to extend the measurements to more countries. First we identified 84 countries appearing in the various indices that had no hosts monitored by PingER. We categorized these as follows:

  • No measurement was made since the country's performance was not significantly different from adjacent countries. This was particularly true in Europe; to make comparisons easier we added Sweden, Gibraltar, Macedonia, the Faroe Islands, Austria, Andorra, Bulgaria and Belgium;
  • Previously there had been little interest, in particular Greenland;
  • For many countries in Africa it was hard to find reliable hosts that did not block pings and were really located in the country, as opposed to a proxy elsewhere. Mike Jensen gave us a list of hosts in 18 African countries where we previously had no PingER remote hosts. Based on his list we were able to extract 13 hosts in 8 countries (Guinea-Bissau, Sierra Leone, Seychelles, Mauritius, Liberia, Gambia, Swaziland and Djibouti) that appeared to be in the country and also responded to pings; these were added.

We also obtained a list of 193 Ookla Speedtest servers in 68 countries together with their locations (latitudes/longitudes). From these we added 135 remote hosts in 66 countries (excluding hosts that did not ping and hosts in Canada and the United States, where we had adequate coverage for our purposes) to the PingER database. Finally we obtained about 10 hosts in Africa from contacts following the presentation at the "2nd IHY-Africa Workshop", 11-16 November 2007, Addis Ababa, Ethiopia, and the presentation at the "Internet & Grids in Africa: An Asset for African Scientists for the Benefit of African Society".

Another concern was that the most comprehensive/complete set of measurements is from the SLAC site, which monitors over 600 hosts; most other monitors monitor 200 or fewer hosts. We found the results biased for measurements that rely on distance (e.g. RTT): remote hosts close to SLAC show better performance. With the current version of PingER we can centrally maintain a list of Beacons (remote hosts that are monitored by all monitoring hosts) and automatically update the monitors with the current Beacons on a daily basis. There are competing concerns regarding increasing the number of hosts monitored. Each monitor/remote pair adds an extra ~100 bits/s to the network, and some countries with monitors, such as Palestine, have limited bandwidth available. Thus, though we need to increase the Beacon list, we need to do this carefully so we do not abuse monitors or remote hosts in countries with poor connectivity. Therefore, for each country with reasonable connectivity, one host is selected that is reliable (based on previous PingER measurements) and represents the country. In addition, for sites with limited connectivity we will restrict the ping sizes to 100 Bytes rather than both 100 & 1000 Bytes, i.e. a reduction in traffic by roughly a factor of ten. The idea is to come up with a list of about 120 Beacons covering most countries, thus roughly doubling our current list. For more on this see PingER Beacons Expansion.
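A back-of-the-envelope sketch of the per-pair load is below. The ping counts and the 30-minute interval are illustrative assumptions, not PingER's exact schedule, but they reproduce both the ~100 bits/s per-pair figure and the roughly factor-of-ten saving from dropping the 1000-byte pings.

```python
def pair_load_bits_per_s(bytes_per_interval, interval_s):
    """Average network load one monitor/remote host pair adds, in bits/s.
    bytes_per_interval counts all ping payload bytes (requests + replies)
    sent during one measurement interval."""
    return bytes_per_interval * 8 / interval_s

# Illustrative schedule (an assumption): 10 pings at each of 100 and
# 1000 bytes, doubled for the echo replies, every 30 minutes.
full    = pair_load_bits_per_s(2 * 10 * (100 + 1000), interval_s=1800)
reduced = pair_load_bits_per_s(2 * 10 * 100,          interval_s=1800)
print(round(full), round(reduced))  # ~98 and ~9 bits/s, an ~11x reduction
```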

Indices Used By The Report

Gross Domestic Product per capita

The gross domestic product or GDP is a way of measuring the size of a region's economy. It is usually normalized by dividing by population (per capita). It is often compared with the purchasing power parity (PPP) of the currency relative to the US$. The terminology for GDP is changing to Gross National Income (GNI); see the World Bank's Data and Statistics web page. There are measures from the World Bank and the Central Intelligence Agency among others; we are using https://www.cia.gov/library/publications/the-world-factbook/rankorder/2004rank.html

Human Development Index (HDI)

The UNDP Human Development Index report of 2006 was compiled from data for 2004 and covered 175 UN member countries (out of 192). It is a comparative measure of life expectancy, literacy, education, and standards of living for countries worldwide. More specifically:

...

It is a standard means of measuring well-being, especially child welfare. It is used to distinguish whether the country is a developed, a developing, or an under-developed country, and also to measure the impact of economic policies on quality of life. The index was developed in 1990 by Pakistani economist Mahbub ul Haq.

Digital Access Index (DAI) 

The Digital Access Index (DAI) from the ITU has data from 1995 to 2003. It combines eight variables, covering five areas, to provide an overall country score. The areas are availability of infrastructure, affordability of access, educational level, quality of ICT services, and Internet usage. The results of the Index point to potential stumbling blocks in ICT adoption and can help countries identify their relative strengths and weaknesses.

Digital Opportunity Index (DOI)

In 2006 the ITU submitted the Digital Opportunity Index report for 180 economies worldwide. The Index monitors the mobile communications that promise to bridge the digital divide in many parts of the world, as well as more recent technologies such as broadband and mobile Internet access.

Network Readiness Index (NRI)

The Network Readiness Index (NRI) was used in the World Economic Forum's Global Information Technology Report 2006-2007. It covers about 120 countries. It rests on three main sub-indexes:

  • The presence of an ICT-conducive environment in a given country, assessed via a number of features of the broad business environment, some regulatory aspects, and the soft and hard infrastructure for ICT;
  • The level of ICT readiness and propensity of the three main national stakeholders (individuals, the business sector, and the government); and
  • The actual use of ICT by the above three stakeholders.

Technology Achievement Index (TAI)

The United Nations Development Programme (UNDP) introduced the Technology Achievement Index (TAI) in 2001 to reflect a country's capacity to participate in the technological innovations of the network age. It contains data from 1995-2000 and covers 72 countries. The TAI aims to capture how well a country is creating and diffusing technology and building a human skill base. It includes the following dimensions: creation of technology (e.g. patents, royalty receipts); diffusion of recent innovations (Internet hosts/capita, high & medium tech exports as a share of all exports); diffusion of old innovations (log phones/capita, log of electricity consumption/capita); and human skills (mean years of schooling, gross enrollment at the tertiary level in science, math & engineering).

Digital Opportunity Index (DOI) and Opportunity Index (OI)

In 2006 the ITU submitted the Digital Opportunity Index report for 180 economies worldwide. It is related to the ITU's earlier DAI. It is based on 2004/2005 data and uses 11 indicators, each normalized for population or homes. These include coverage by mobile telephony, Internet tariffs, # computers, # fixed line phones, # mobile subscribers & # Internet users.

A related Index is the ICT Opportunity Index (OI). This is based on many of the indicators used by the DOI but adds TVs, literacy, e-student enrollment & International bandwidth. It has indices for 1996-2003 and covers 139 economies. This and the DOI are the only two indices specifically endorsed by the WSIS for use in the approved evaluation methodology.

Since the DOI has the most recent results, large coverage, and the endorsement of the WSIS, we tend to prefer it at the moment.

Corruption Perception Index

Since 1995, Transparency International has published an annual Corruption Perceptions Index (CPI) ordering the countries of the world according to "the degree to which corruption is perceived to exist among public officials and politicians". The organization defines corruption as "the abuse of entrusted power for private gain".

The 2003 poll covered 133 countries; the 2007 survey, 180. A higher score means less (perceived) corruption. The results show seven out of every ten countries (and nine out of every ten developing countries) with an index of less than 5 points out of 10.

Happy Planet Index

The Happy Planet Index (HPI) reveals the ecological efficiency with which human well-being is delivered. The index combines environmental impact with human well-being to measure the environmental efficiency with which, country by country, people live long and happy lives. Below is an Excel correlation plot of the HPI vs. PingER's normalized derived throughput. It is seen that there is little correlation.

[Figure: HPI vs. PingER normalized derived throughput]

Summary

The following table summarizes the indices by source, coverage and currency of data.

Abbreviation   Name                                Organization                 Number of countries   Date of Data
GDP            Gross Domestic Product per capita   CIA                          229                   2001-2006
HDI            Human Development Index             UNDP                         175                   2004
DAI            Digital Access Index                ITU                          180                   1995-2003
NRI            Network Readiness Index             World Economic Forum         120                   2007
TAI            Technology Achievement Index        UNDP                         72                    1995-2000
DOI            Digital Opportunity Index           ITU                          180                   2004-2005
OI             Opportunity Index                   ITU                          139                   1996-2003
CPI            Corruption Perception Index         Transparency International   180                   2007

Of these indices we chose to focus on the HDI, since it measures human development, and the DOI, since it is still being developed by the ITU, represents technical/economic development, has recent data and has extensive coverage.

Correlations Between Indices 

These are shown below:

[Scatter plots: DAI, TAI vs DOI, NRI vs DOI, DAI vs DOI]

[Scatter plots: DOI vs GDP, HDI vs DOI, GDP/cap & DOI vs CPI]

Indices trends

One index that has data going back a few years is the ICT OI. The OI trends for the world regions are seen below.

It is interesting to see the linear growth vs. the exponential growth of PingER. I suspect they take the log of some development component that goes into the OI, which makes the exponential linear.

I added a trendline (linear fit to the East Asia data). Latin America, SE Asia & Oceania (the Oceania aggregate is driven by all the Oceanian countries other than NZ & Australia; in reality we ought to weight the numbers by the populations before aggregating) are about 8 years behind, and Central Asia, S. Asia and Africa about 10 years behind.

There is little evidence that S. Asia is improving relative to the rest. Africa is falling behind.

The developed regions (N. America, Europe, E. Asia) are pulling away (greater slope) from the other regions.

A linear fit cannot be right, else at some time in the past the OI would have had to be negative.

[Figure: ICT OI trends by world region, with linear trendline fit to East Asia]
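The "years behind" estimates above come from reading the lag off the linear trendline. A small sketch of that calculation follows; the slope and intercept used here are hypothetical, not the actual East Asia fit parameters.

```python
def years_behind(oi_region_now, slope, intercept, year_now):
    """Years by which a region lags a reference trendline OI = slope*year + intercept.
    The lag is the time since the reference region had the region's current OI.
    (The fit parameters passed in are hypothetical.)"""
    year_when_reference_matched = (oi_region_now - intercept) / slope
    return year_now - year_when_reference_matched

# Hypothetical reference trendline OI = 8*(year - 1996) + 40, i.e. 8 OI points/year;
# a region at OI = 72 in 2005 lags it by 5 years.
print(years_behind(oi_region_now=72, slope=8, intercept=40 - 8 * 1996, year_now=2005))
```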

Maps

Some maps of the index values are seen below:

[Maps: GDP/capita, PPP, Human Development Index, Digital Opportunity Index, PingER Deployment]

[Maps: International Bandwidth, ICT OI for 2001, ICT OI for 2005, CPI for 2007 (from Wikipedia)]

 

 

[Maps: PingER Min_RTT, PingER Throughput]

[Maps: PingER Deployment, PingER Unreachability, PingER Jitter, PingER Loss]

The maps for the ICT OI indicate that from 2001 to 2005 Latin America, N. Africa, India, Australia, New Zealand, Southern Europe, Russia and China were improving. Little improvement is seen in the Sub-Saharan Africa region.

PingER metrics

These are described in the Tutorial on Internet Monitoring and PingER at SLAC.

Normalized Derived TCP Throughput 

The normalization reduces the impact of the derived throughput being proportional to 1/RTT, which would otherwise make sites close to the measurement/monitor host show better derived throughputs. Thus we calculate:

normalized_throughput = throughput * minimum_rtt(remote_region) / minimum_rtt(monitoring_region)
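The normalization above can be written as a small helper; the throughput and RTT figures in the example are hypothetical.

```python
def normalized_throughput(throughput_kbps, min_rtt_remote_ms, min_rtt_monitor_ms):
    """Scale the derived TCP throughput by the ratio of minimum RTTs, so that
    remote regions far from the monitor are not penalized merely for distance
    (the derived throughput is proportional to 1/RTT)."""
    return throughput_kbps * min_rtt_remote_ms / min_rtt_monitor_ms

# Hypothetical: 500 kbits/s derived throughput to a region with a 200 ms
# minimum RTT, from a monitoring region with a 50 ms minimum RTT:
print(normalized_throughput(500, 200, 50))  # 2000.0
```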

Comparing this to the GDP/capita we get the scatter plot below. The correlation is seen to be moderate to strong (R2 ~ 0.59). The figure also identifies some of the major outliers. The countries below the line are usually well-developed but hard-to-reach countries (such as Finland and Iceland), or wealthy countries that have not fully developed their Internet access (e.g. UAE).

[Figure: normalized derived throughput vs. GDP/capita, with outliers identified]

Comparisons with loss, jitter and unreachability are shown below, together with jitter vs. loss. I can see why unreachability may not correlate well with loss or jitter, since unreachability is often an end site/host problem. I am somewhat surprised by the lack of a strong correlation between jitter and loss and need to think more deeply about it. I would have expected a correlation: jitter is classically caused by router queuing (the output link being busy with another packet) delaying packets, and if the output link does not clear then the queue fills with more packets waiting to be sent and newly arriving packets are lost, hence the expected correlation. This can be caused by one or more fast links trying to feed a slower or more congested link (e.g. to a developing country). There can of course be other reasons for loss, such as noise and dB loss (especially with wireless and probably satellite circuits), which are not correlated with queuing. I would expect to see the latter causes mainly in places where networking is poor (e.g. developing regions). I see no evidence for the correlation being stronger or weaker at larger values. At some point I will dig deeper by looking at outliers to see if I can find rhyme or reason.

[Scatter plots: Throughput vs Jitter, Throughput vs Loss, Throughput vs Unreachability, Jitter vs Loss]

Our focus is more on the DOI because its data is more current, it is being actively developed by the ITU, and it covers most countries. Comparisons of PingER parameters with the DOI are shown below:

[Scatter plots: Throughput vs DOI, Loss vs DOI, Unreachability vs DOI]

Comparisons with GDP/capita are shown below.
 

[Scatter plots: Loss vs GDP/cap, Throughput vs GDP/cap, Jitter vs GDP/cap, Unreachability vs GDP/cap]

Throughput vs International Bandwidth 

Comparisons with international bandwidth for 2005 are shown below. The difference in absolute values is to be expected, since many end sites share the international bandwidth. Thus the international bandwidth typically needs to be larger than the last-mile bandwidth, and the latter often dictates the overall bandwidth of a connection. Also the last mile is often congested, so a typical session will only get a fraction of it. A further factor is that the PingER throughput is derived from the Mathis formula (TCP throughput = 8*1460/(RTT*sqrt(loss))), which assumes loss is driven by the TCP congestion algorithm, whereas we are using ping to sample it.
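The Mathis-derived throughput used throughout can be written as a small helper. The RTT and loss in the example are hypothetical; note that with the MSS in bytes and the RTT in ms, the formula conveniently yields kbits/s.

```python
from math import sqrt

def mathis_throughput_kbps(rtt_ms, loss_fraction, mss_bytes=1460):
    """Derived TCP throughput from the Mathis et al. approximation
    throughput ~ MSS / (RTT * sqrt(loss)).  With the MSS in bytes
    (x8 to get bits) and the RTT in ms, the result is in kbits/s."""
    return 8 * mss_bytes / (rtt_ms * sqrt(loss_fraction))

# Hypothetical path: 190 ms RTT and 0.1% packet loss
print(round(mathis_throughput_kbps(190, 0.001)))  # ~1944 kbits/s
```

This also makes the 1/RTT dependence explicit, which is what the normalization described earlier compensates for.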

[Scatter plot: Throughput vs International Bandwidth]

We need to look at the outliers to see why they do not correlate well.  Possibilities that come to mind are:

  • The international bandwidth may be high but the last mile might not be, and often the last mile dominates performance. E.g. Pakistan had a backbone of 155 Mbits/s (see the South Asia Case Study) which was hardly used and not congested at all; they proudly showed it to me. There were 3 international access points with 57 Mbps, 33 Mbps & 65 Mbps, all feeding into a single international cable, so there was plenty of bandwidth but a single point of failure. However, when one dug in, we found that the end sites had 1-2 Mbits/s, typically for hundreds of people, and were heavily congested.
  • It is also possible that the international capacity measurement was made at a different time from the PingER measurement.
  • PingER is measuring mainly to Academic and Research sites, whereas the international bandwidth is probably shared by all.
  • The host that PingER is monitoring may not be in the location (e.g. the country) it is advertised as being in. This is especially a problem when there are few hosts being monitored in the country.

The relative residuals between the trendline and the observed PingER throughputs are shown in the figure below. They are calculated as:

Relative residual = (Observed(Norm throughput) - Theoretical(Norm throughput))/ Theoretical(Norm throughput)

where Theoretical(Norm throughput) = A * International bandwidth ^ B

and A and B are from the trendline fit.
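The trendline fit and residual calculation above can be sketched as follows. The power law Theoretical = A * bandwidth^B is fitted linearly in log-log space; the per-country bandwidth/throughput pairs below are hypothetical, not the actual 2005 data.

```python
from math import exp, log

def power_law_fit(xs, ys):
    """Least-squares fit of y = A * x^B, done linearly in log-log space.
    Returns (A, B)."""
    lx = [log(x) for x in xs]
    ly = [log(y) for y in ys]
    n = len(xs)
    mx, my = sum(lx) / n, sum(ly) / n
    B = (sum((a - mx) * (b - my) for a, b in zip(lx, ly))
         / sum((a - mx) ** 2 for a in lx))
    A = exp(my - B * mx)
    return A, B

# Hypothetical pairs: international bandwidth (Mbps) vs. PingER
# normalized derived throughput (kbits/s)
intl_bw = [10.0, 50.0, 200.0, 1000.0, 5000.0]
norm_thru = [150.0, 400.0, 700.0, 1800.0, 3500.0]

A, B = power_law_fit(intl_bw, norm_thru)
# Relative residual = (Observed - Theoretical) / Theoretical
residuals = [(obs - A * bw ** B) / (A * bw ** B)
             for bw, obs in zip(intl_bw, norm_thru)]
```

A country with a residual of +2, like the Estonia case discussed below, would thus have an observed throughput three times the trendline prediction.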

[Figure: relative residuals of normalized throughput vs. international bandwidth, by country]
 
The largest outlier is the one host measured by PingER in Libya (mail.lttnet.net). The minimum RTT of ~190 ms measured from SLAC is appropriate for a host in Libya, and Geo IP Tools indicates the host is in Libya. The losses are low, and this drives the relatively high throughput (throughput ~ 8*1460/(RTT*sqrt(loss)) kbits/s). The top-level domain of .net indicates this may be a network provider; in fact LTTnet is Libya Telecom & Technology. Typically network providers will have better connections than end-site universities. It also appears to be an important mail service for the country and so to have optimal connectivity. It has been extremely hard to find a host in Libya that responds to pings. However, it appears we do need more hosts there to resolve the anomalously high performance of the one host we have. If one removes this anomalous point from the data then the trendline fit R^2 goes from 0.59 to 0.62.

The next largest outliers are Estonia, Australia and New Zealand. The latter two have been improving their connectivity recently. Since the international bandwidth measurements are from 2005, it may be that comparing the normalized throughput for 2006 with the international bandwidth would bring the residuals for Australia and New Zealand below 0.5.

Estonia for 2006 still stands out with a residual of over 2. This may be partially a consequence of Estonia's ICT success story, driven by Government-led initiatives: it is the only Central and Eastern European transition economy to make it into the top 25 of the WSIS DOI for 2007, and it has the highest Internet and broadband penetration in Central and Eastern Europe.