
...

Wednesday July 1st 2015 9:00pm Pacific Standard Time, Thursday July 2nd 2015 9:00am Pakistan time, Thursday July 2nd 2015 noon Malaysian time, Thursday July 2nd 2015 02:00am Rio Standard Time.

...

Hassaan Khaliq?, Kashif, Raja, Samad Riaz? (SEECS); Johari?+, Nara, Adnan Khan (UNIMAS); Abdullah, Badrul, Anjum, Ridzuan?, Ibrahim? (UM); Hanan, Saqib (UTM); Adib?+, Fatima+ (UUM); Fizi Jalil (MYREN)?; Thiago+, Les+, Bebo+ (SLAC)

...

  • Membership of pinger-my in https://groups.google.com
  • Arshad has left SEECS/NUST to become the Rector of the National Textile University in Faisalabad.  We have contacted the Rector at NUST to discuss continued support for PingER at SEECS. He has been very supportive and responded as follows: 

    We will surely continue with the PingER project. Dr Zaidi, who has succeeded Arshad, has spent as many years in NIIT and later SEECS as Arshad. He will take care of the project.

    I have copied this email to Director Research and Director Academics too. They will fully support Zaidi. Hope your worries are removed. If you find any issues, please give an email to me and I will ensure that we achieve all the targets of this project. Thanks and regards.
    I have emailed Dr Zaidi. 

    The acting principal (Engineer Habeel) asked for details about the project, which Anjum provided. They have also contacted Hassaan and asked him to be the faculty in charge of the project. Dr. Zaidi and Habeel Ahmed are not clear about the status and direction of the project, so Dr. Arshad told them he can discuss it during his next visit to SEECS. That is the latest I know. I think Dr. Arshad visits SEECS every 15 days or so and is helping with a smooth transition.

    Dr Zaidi writes: 

    Thank you for your email and for reposing confidence in NUST-SEECS for the collaboration that has been ongoing for a number of years and has only grown stronger with time. It is indeed a pleasure for us here at NUST-SEECS to be working in collaboration with your team at SLAC. We certainly value this collaborative effort and look forward to taking it to new heights.

    I have recently conducted a special review meeting to ensure that things are back on track. Through this email, I would like to reiterate our commitment to this collaboration. We assure you of our full support for taking this a step further. As for your requirement of a System Administrator and a faculty member in charge of this project, you will be happy to know that Dr Hassan Khaliq, a faculty member at SEECS who is already involved in the project, has been made responsible for managing the affairs from our end. In addition, we have deployed an MS research student to assist him in this effort. We have also raised the requirement of a full-time resource for this project and we are hoping to have this resource with us soon.

     

    Please do not hesitate to contact me directly should you face any problem from this day forward.

  • Anjum: How are the measurements, analysis and paper on GeoLocation coming along?

    • He has been looking at the alpha (directivity) behavior; there was an exponential behavior, but it was unclear how to take advantage of it. Anjum has cut N. America into regions to facilitate improving the accuracy of the alpha prediction. He believes this will improve things.

    • Anjum has not started on this yet, he hopes to get to it when he returns from Canada
  • Johari has got the OK from the conference organizing committee to hold a colocated PingER/BigData workshop on August 3rd, the day before CITA 2015 (see http://www.cita.my/), an international conference on transforming Big Data into Knowledge, 4th-6th August 2015. Johari will provide relevant information to Bebo. Bebo will be able to make a presentation. Les has sent Bebo some relevant slide decks. Johari has an abstract from Bebo. There are PingER-related papers submitted from UM.

...

Maria Luiza has requested Raphaela and Christiane to give an update on what they've been working on in the last weeks, and maybe talk with Thiago about how they've been using their local cloud of 4 nodes with Cloudera and a data cube version of the ontology.

...

Christiane's report is at: Size Inflation of PingER Data for use in PingER LOD

UUM

Adib reports 56/231/2015:

"I did follow up on this issue personally as I promised. I spent the whole day on Thursday (07 May 2015) with the network engineer from the computer center troubleshooting this connectivity issue. I was hoping to report good news last meeting, unfortunately, without any tangible output. In fact, there is no issue on my side with the UUM PingER server, except the connectivity. Even for the UPS, I am waiting to get one soon to avoid the electricity problem.

As a result of this, we found out that there is a problematic switch connecting the UUM PingER server with the center. I cannot do much here because it may involve buying a new switch and getting approval from their boss. I am waiting for an update and will follow up on this."

The PingER UUM problem is half solved.

Fatima has installed Hadoop on all three machines. One will act as the host machine from which the remote machines will be controlled. She is currently trying out some MapReduce examples and the Hive installation.
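For readers new to the MapReduce examples Fatima is trying out, the programming model itself can be sketched in a few lines of plain Python (a minimal stand-in for Hadoop's distributed version, using the classic word-count example; none of this is Hadoop API code):

```python
from collections import defaultdict
from itertools import chain

def mapper(line):
    """Map phase: emit a (word, 1) pair for every word in one input line."""
    for word in line.lower().split():
        yield word, 1

def reducer(word, counts):
    """Reduce phase: combine all counts emitted for one word."""
    return word, sum(counts)

def map_reduce(lines):
    """Shuffle the intermediate pairs by key, then reduce each group.
    Hadoop does the same three steps, but distributed across nodes."""
    groups = defaultdict(list)
    for key, value in chain.from_iterable(mapper(l) for l in lines):
        groups[key].append(value)
    return dict(reducer(k, v) for k, v in groups.items())

result = map_reduce(["ping the server", "the server replied"])
# result == {"ping": 1, "the": 2, "server": 2, "replied": 1}
```

The same map/shuffle/reduce shape underlies both the Hadoop examples and the file-consolidation jobs discussed below for the PingER data.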

...

Ibrahim had downloaded the PingER data as zip files; however, when he stored them in the Hadoop Distributed File System (HDFS) and tried to process them, the files got corrupted, so he had to extract them, but a single zip file contains more than 10,000 small zip files. He is therefore trying to create a MapReduce job that can accept zip input directly, which will save him a lot of time; out of the box, MapReduce only reads plain formats such as .txt, document files, or a database. He will meet with Dr. Anjum on 11 June to ask for advice on how they can work on this together. Update?
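The zip-of-zips layout Ibrahim describes can be walked without ever extracting to disk, which is the first step whether the records are then fed to a MapReduce job or repacked into larger files. A minimal sketch using only Python's standard library (the nested-zip layout is assumed from the description above; this is not the actual PingER ingest code):

```python
import io
import zipfile

def iter_nested_zip(outer_path):
    """Yield (inner_zip_name, member_name, data) for every file inside
    every inner zip contained in the outer zip archive."""
    with zipfile.ZipFile(outer_path) as outer:
        for inner_name in outer.namelist():
            if not inner_name.lower().endswith(".zip"):
                continue
            # Read the inner zip fully into memory; no temp files needed.
            inner_bytes = io.BytesIO(outer.read(inner_name))
            with zipfile.ZipFile(inner_bytes) as inner:
                for member in inner.namelist():
                    yield inner_name, member, inner.read(member)
```

Each yielded record could then be appended to one large per-year file (HDFS's preferred shape, as Renan notes below) or emitted as a key/value pair to a custom MapReduce input format.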

Renan reports: "I had a similar experience. HDFS works better with big files rather than many small files. What I did was to create a Map-Reduce job to consolidate all those thousands of small files into only 17 big files, each of them containing all data for a given year [1998-2014]. I didn't use Hadoop MapReduce for this, though; I used a different distributed dataflow engine that also implements map and reduce operators. I am working on providing Thiago with these 17 big files. Once he gets the data, he can share them with you and explain how the data in each file are stored."
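The single-machine version of the consolidation Renan describes, grouping many small files by year and concatenating each group into one big file, can be sketched as follows. The filename convention (a 4-digit year embedded in each small file's name, e.g. `pinger-2003-06-15.txt`) is a hypothetical stand-in; the real PingER file layout and Renan's dataflow engine are not shown here:

```python
import re
from collections import defaultdict
from pathlib import Path

def compact_by_year(src_dir, dst_dir):
    """Concatenate many small files into one file per year.
    Assumes each small file has a 4-digit year (19xx or 20xx)
    somewhere in its name -- a hypothetical naming scheme."""
    by_year = defaultdict(list)
    for path in sorted(Path(src_dir).iterdir()):
        m = re.search(r"(19|20)\d{2}", path.name)
        if m:
            by_year[m.group(0)].append(path)
    Path(dst_dir).mkdir(parents=True, exist_ok=True)
    for year, paths in by_year.items():
        # One big output file per year, small inputs appended in name order.
        with open(Path(dst_dir) / f"{year}.txt", "w") as out:
            for p in paths:
                out.write(p.read_text())
    return sorted(by_year)
```

In map/reduce terms, extracting the year is the map, grouping by year is the shuffle, and the concatenation is the reduce, which is why the same job parallelizes naturally on a distributed engine.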

...