MPhil and PhD degree supervision

Posted December 5, 2013 by Khedo
Categories: Research

I am currently supervising MPhil and PhD degree projects at the University of Mauritius in the areas of Mobile Computing, Wireless Sensor Networks (WSNs), Context-Aware Systems and Social Computing.

I am particularly interested in the implementation of the following projects:

  • Agricultural and Environmental Sensor Networks
  • Context-Aware Systems for Visually Impaired Persons
  • Pervasive and Mobile Health Systems
  • Mobile Disaster Management Systems
  • Social Media Analytics
  • Privacy issues in Online Social Networks

Contact me for more information: k.khedo AT uom.ac.mu

Key Differences Between Web 1.0 and Web 2.0

Posted August 22, 2008 by Khedo
Categories: Uncategorized

Among Web 2.0’s key attributes are the growth of social networks, bi-directional communication, diverse content types, and various “glue” technologies. The authors note that while most of Web 2.0 shares the same substrate as Web 1.0, there are some significant differences. Features typical of Web 2.0 sites include users as first-class entities in the system, with prominent profile pages; the ability to connect with other users through “friend” links, membership in various types of “groups,” and subscriptions or RSS feeds of “updates” from other users; the ability to post content in various media, including blogs, photos, videos, ratings, and tags; and more technical features, such as embedding of rich content types, communication with other users through internal email or instant-messaging systems, and a public API to permit third-party augmentations and mash-ups.
Web 1.0 metrics of similar interest in Web 2.0 include share of overall Internet traffic, numbers of users and servers, and the mix of protocols in use. About 500 million users reside in a few tens of social networks, with the top few accounting for the bulk of the users and traffic, and traffic within a Web 2.0 site is difficult to measure without help from the site itself. The challenges of streamlining popular sites for mobile users differ slightly between Web 1.0 and Web 2.0: because most Web 2.0 communications are short or episodic, instant notification through mobile devices is easier to support. Most communication in Web 2.0 is between users, so Web 2.0 sites have no easy way to shed requests selectively during overload; instead, the sites apply varying restrictions to keep overall load and latency reasonable.
Some Web 2.0 sites are eager to maximize and retain membership within an “electronic fence,” which can encourage balkanization, although total balkanization is likely to be prevented by a countercurrent stemming from the link-based nature of the Web, with users continuously connecting to sites outside the fence. The authors point out that there are substantial challenges in helping users comprehend the privacy implications of sharing their personal data and in representing usage policies for that data simply.
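The “users as first-class entities” model described above can be sketched as a minimal data structure. This is an illustrative toy only; the class and field names are my own assumptions, and no real Web 2.0 site works exactly this way.

```python
from dataclasses import dataclass, field

@dataclass
class User:
    name: str
    friends: set = field(default_factory=set)   # links to other users
    groups: set = field(default_factory=set)    # group memberships
    posts: list = field(default_factory=list)   # blogs, photos, tags...

def befriend(a, b):
    """Store the link symmetrically: 'friends' are mutual connections."""
    a.friends.add(b.name)
    b.friends.add(a.name)

alice, bob = User("alice"), User("bob")
befriend(alice, bob)
alice.groups.add("photography")
alice.posts.append(("photo", "sunset.jpg"))
print(bob.friends)  # {'alice'}
```

Friendship is stored on both users because the article describes “friends” as mutual links rather than one-way follows.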

First Monday (06/08) Vol. 13, No. 6, Cormode, Graham; Krishnamurthy, Balachander

Designers on Quest to Build $12 Computer

Posted August 8, 2008 by Khedo
Categories: Computing, Research

A group of computer designers at MIT’s International Development Design Summit is trying to develop a $12 computer. Derek Lomas is basing the computer on a device he saw people using in Bangalore, India, in which a cheap keyboard was combined with a Nintendo-like device and connected to a home TV. Lomas and others at the MIT summit hope to improve the system, based on old Apple II computers, to offer rudimentary Web access and other features. “We see this as a model that could increase economic opportunities for people in developing countries,” Lomas says. He thinks that with some help from programmers, the Apple II computers can be developed into more capable devices and give schools in developing countries computer labs. A six-member team at the summit is working on writing improved programs and connecting the devices to the Internet using cell phones. The group also wants to add memory chips to allow users to write and store their own programs. Apple II enthusiasts have been recruited to help with the programming, and the group has contacted an Indian nonprofit that has expressed interest in using the devices.
Boston Herald (08/04/08) Kronenberg, Jerry

Little Sensors Are Heavyweights in Rainforest Information Gathering

Posted July 9, 2008 by Khedo
Categories: Research

University of Alberta scientists are creating wireless sensor networks for use in remote locations. One of the first projects, called ECOnet, will place small sensors in rainforests in Brazil and Panama this fall. The sensors will form a network that will create a 3D image of what is happening in the atmosphere, says Alberta professor Arturo Sanchez-Azofeifa. He says the sensor network is like taking an MRI of the forest. To test the system, six sensors have been placed in the university’s Atrium Oasis, a tropical display greenhouse. Data collected during ECOnet is available online for anyone to examine, allowing scientists to collect information daily from remote locations without having to travel. “You can take a snapshot of the environment you’re monitoring, or you can get more, long-term data, which will allow scientists to determine if there are certain patterns that emerge or if there are any anomalies occurring,” Sanchez-Azofeifa says. The sensors are still evolving, and Sanchez-Azofeifa expects them to become smaller and less expensive to the point where it may be possible to fly over a location and drop thousands of the sensors into the canopy. The sensors are currently powered by small lithium batteries and have a life of about three years, though that may change as well. Sanchez-Azofeifa says the school’s computer engineers are working on using the motion of the forest to power the sensors.
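The “snapshot of the environment” idea can be illustrated with a small sketch: each node reports readings, and the latest reading from every node is averaged into one picture of the forest. This is a hypothetical illustration; the reading fields and the aggregation are my own assumptions, not ECOnet’s actual data model or protocol.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class Reading:
    node_id: int        # which sensor reported
    temperature: float  # degrees Celsius
    humidity: float     # percent relative humidity

def snapshot(readings):
    """Average the latest reading from each node into one snapshot."""
    latest = {}
    for r in readings:
        latest[r.node_id] = r  # later readings overwrite earlier ones
    return {
        "temperature": mean(r.temperature for r in latest.values()),
        "humidity": mean(r.humidity for r in latest.values()),
    }

data = [
    Reading(1, 24.0, 80.0),
    Reading(2, 25.0, 82.0),
    Reading(1, 26.0, 84.0),  # newer reading from node 1 wins
]
print(snapshot(data))  # {'temperature': 25.5, 'humidity': 83.0}
```

Keeping only the latest reading per node mirrors the article’s point that a network can deliver either an instantaneous snapshot or, by retaining history, long-term trend data.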

University of Alberta (06/02/08) Poon, Ileiren

Are We Closer to a ‘Matrix’-Style World?

Posted May 12, 2008 by Khedo
Categories: Uncategorized

Virtual reality technology is progressing rapidly thanks to advances in computing power and graphics, and some researchers believe a “Matrix”-style world where it is difficult to distinguish the real from the virtual is right around the corner. “We’ve reached a level now where we can make very realistic images: Five to 10 hours to make images more or less perfect, where people say, ‘Wow, that’s a photograph!’” boasts University of California at San Diego professor Henrik Wann Jensen. He says achieving the same level of photorealism in real-time animation is within reach thanks to new graphics processors. Jensen is tackling the challenge of power efficiency by slashing the computational costs of photon mapping and ray-tracing algorithms. Whereas previous techniques sampled photons randomly across a light source, Jensen’s method maps the relevant photons along the light’s entire route, allowing a graphics interface to follow the light around a scene and measure the degree of light absorbed, reflected, or scattered by other objects. A notable achievement in touch-based interface technology has been facilitated by a professor at Carnegie Mellon University’s Robotics Institute using magnetic levitation, in which a device hovers above its base using magnetic fields, while the position and orientation of a virtual object on a computer display can be manipulated by a handle. The object’s interaction with obstacles is simulated tactilely by haptic feedback generated by electrical coils. To advance technology that could lead to empathetic virtual characters, researchers have developed the Sensitive Artificial Listener, which can maintain a human-computer dialogue for prolonged periods by employing its sensitivity to non-linguistic signals as well as a repertoire of verbal and non-verbal cues and statements.
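The kind of light-transport arithmetic a renderer performs at each surface hit can be illustrated with Lambert’s cosine law, which splits incoming light into reflected and absorbed portions. This is a deliberately simplified toy, not Jensen’s photon-mapping algorithm; the function and its parameters are illustrative assumptions.

```python
import math

def diffuse_reflection(light_intensity, albedo, angle_deg):
    """Split light hitting a matte surface into reflected and absorbed parts.

    albedo    -- fraction of light the surface reflects (0..1)
    angle_deg -- angle between the light direction and the surface normal
    """
    # Lambert's cosine law: grazing light deposits less energy per area.
    cos_term = max(0.0, math.cos(math.radians(angle_deg)))
    incident = light_intensity * cos_term
    reflected = incident * albedo
    absorbed = incident * (1.0 - albedo)
    return reflected, absorbed

# Light of intensity 100 hitting a 60%-reflective surface at 60 degrees:
r, a = diffuse_reflection(100.0, 0.6, 60.0)
print(round(r, 1), round(a, 1))  # 30.0 20.0
```

A ray tracer repeats this bookkeeping at every bounce, which is why cutting the cost of tracking photons along the light’s route matters so much for real-time photorealism.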
MSNBC (05/05/08) Nelson, Bryn

Seven “Grand Challenges” Face IT in Next Quarter-Century

Posted April 18, 2008 by Khedo
Categories: Computing, Research

Gartner has identified seven technologies that will “completely transform” the business world over the next 25 years. The technologies include parallel programming, wireless power sources for mobile devices, automated speech translation, and computing interfaces that detect human gestures. “Many of the emerging technologies that will be entering the market in 2033 are already known in some form in 2008,” Gartner says. Gartner predicts more natural computing interfaces that can detect gestures and compare those gestures in real time against a gesture “dictionary” that tells the computer what action to take. Mobile devices will no longer have to be charged, as power will be transferred from a remote source, eliminating the need for batteries. Researchers will develop persistent and reliable long-term storage that will hold the world’s digital information on digital media permanently. To create reliable storage that can last 20 to 100 years, researchers need to overcome challenges related to data format, hardware, software, metadata, and information retrieval. Programmer productivity will increase 100-fold, with each programmer’s output rising to meet future demand fueled by a growing reliance on software. The reuse of code will help, but optimizing the reuse of code is a challenge in itself. Gartner also predicts that IT workers will be able to provide exact financial outcomes for IT investments.
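The gesture “dictionary” idea can be sketched as template matching: compare an observed gesture path against stored templates and act on the closest match. The template names and the distance metric here are hypothetical; real gesture recognizers are considerably more sophisticated.

```python
import math

# Hypothetical dictionary mapping gesture names to template paths
# (short sequences of 2-D points).
GESTURE_DICTIONARY = {
    "swipe_right": [(0, 0), (1, 0), (2, 0)],
    "swipe_up":    [(0, 0), (0, 1), (0, 2)],
}

def path_distance(a, b):
    """Sum of pointwise Euclidean distances between two equal-length paths."""
    return sum(math.dist(p, q) for p, q in zip(a, b))

def recognize(observed):
    """Return the dictionary gesture whose template is closest to the path."""
    return min(GESTURE_DICTIONARY,
               key=lambda name: path_distance(observed, GESTURE_DICTIONARY[name]))

# A noisy rightward swipe still matches the right template:
print(recognize([(0, 0), (1.1, 0.1), (2.0, -0.1)]))  # swipe_right
```

The sketch requires Python 3.8+ for `math.dist`; the real-time aspect Gartner describes would come from running such a comparison continuously against live sensor input.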

Network World (04/09/08) Brodkin, Jon

Industry Giants Try to Break Computing’s Dead End

Posted March 25, 2008 by Khedo
Categories: Computing, Research

Intel and Microsoft yesterday announced that they will provide $20 million over five years to two groups of university researchers that will work to design a new generation of computing systems. The goal is to prevent the industry from coming to a dead end that would halt decades of performance increases in computers. The researchers’ efforts could enable the development of new kinds of portable computers that will help computer engineers address a variety of challenges, including speech recognition, image processing, health care systems, and music. The research grant will be used to create independent laboratories at the University of California, Berkeley and the University of Illinois, Urbana-Champaign. Each lab will work to reinvent computing by developing hardware, software, and a new generation of applications powered by computer chips containing multiple processors. The research effort is partially motivated by an increasing sense that the industry is in a crisis because advanced parallel software has failed to emerge quickly. The problem is that software needed to keep dozens of processors busy simultaneously does not exist. Although much industry discussion has focused on centralized cloud computing, the new research labs will instead aim to create breakthroughs in mobile computing systems. Professor David Patterson, past president of ACM, will head the new Universal Parallel Computing Research Center at Berkeley, while the Illinois lab will be led by professors Marc Snir and Wenmei Hwu.
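Amdahl’s law is the standard way to quantify why keeping dozens of processors busy is so hard: if any fraction of a program is serial, speedup saturates no matter how many cores are added. This is background illustration, not part of the Berkeley or Illinois labs’ stated research agenda.

```python
def amdahl_speedup(serial_fraction, processors):
    """Maximum speedup when serial_fraction of the work cannot be parallelized."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / processors)

# Even with just 5% serial work, 64 cores deliver far less than 64x:
print(round(amdahl_speedup(0.05, 64), 1))  # 15.4
```

This is why the article frames the missing parallel software as a crisis: the payoff from multi-core chips depends almost entirely on driving the serial fraction toward zero.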

New York Times (03/19/08) P. C2; Markoff, John

