Published on O'Reilly Network (http://www.oreillynet.com/)


Using Mobile Phones to Model Complex Social Systems

by Nathan Eagle
06/20/2005

Editor's note: Nathan Eagle offers this look at the Reality Mining project, underway at MIT's Media Lab, which demonstrates how mobile phones can be used to model complex social systems. If this article piques your interest, be sure to check out Nathan's session, titled "Modeling Complex Social Systems: How My Phone Can Predict What I'll Be Doing After This Talk," at O'Reilly's upcoming Where 2.0 Conference. There, Nathan will discuss further how the data collected from the phones of 100 human subjects at MIT provides insights into the dynamics of both individual and group behavior.

The very nature of mobile phones makes them ideal vehicles for studying both individuals and organizations: people habitually carry a mobile phone with them and use it as a medium for much of their communication. Now that handset manufacturers are opening their platforms to developers, standard mobile phones can be harnessed as networked wearable sensors. The information available from today's phones includes the user's location (cell tower ID), people nearby (repeated Bluetooth scans), and communication (call and SMS logs), as well as application usage and phone status (idle, charging, and so on). And because the phones themselves are networked, their functionality transcends that of a mere logging device augmenting social surveys. Phones can begin to be used as a means of social network intervention--supplying introductions between two proximate people who don't know each other, but probably should.
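
To make these data channels concrete, the sketch below shows the kinds of records such a logger might emit. It is a minimal illustration in Python; the field names and types are assumptions for this article, not the actual format used by the Context application or our own software.

    # Hypothetical record types for the data channels described above.
    # These schemas are illustrative, not the Context application's format.
    from dataclasses import dataclass
    from datetime import datetime
    from typing import List, Optional

    @dataclass
    class CallRecord:
        timestamp: datetime
        direction: str               # "incoming", "outgoing", or "missed"
        other_party: str             # hashed phone number
        duration_seconds: int

    @dataclass
    class BluetoothScan:
        timestamp: datetime
        devices_in_range: List[str]  # hashed addresses of nearby Bluetooth devices

    @dataclass
    class TowerEvent:
        timestamp: datetime
        cell_tower_id: str           # coarse proxy for the user's location

    @dataclass
    class PhoneStatus:
        timestamp: datetime
        active_application: Optional[str]
        charging: bool
        idle: bool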

Researchers at MIT are developing a new infrastructure of devices that are not only aware of each other, but are also infused with a sense of social curiosity. Work is ongoing to create devices that attempt to figure out what is being said, and even infer the type of relationship between two people. The mobile device of tomorrow will see what the user sees, hear what the user hears, and learn patterns in the user's behavior. This will enable the device to make inferences about whom the user knows, whom the user likes, and even what the user may do next. Although significant sensing and machine perception are required, it should be only a matter of a few years before this functionality is realized on standard mobile phones.

The MIT Reality Mining Experiment

As far as we know, the Reality Mining project represents the largest mobile phone experiment attempted in academia to date. Our study consists of 100 Nokia 6600 smart phones pre-installed with several pieces of software we have developed, as well as a version of the Context application from the University of Helsinki. Seventy-five users are students or faculty at the MIT Media Lab, while the remaining twenty-five are incoming students at the MIT Sloan business school adjacent to the laboratory. Of the seventy-five Media Lab users, twenty are incoming master's students and five are incoming MIT freshmen. The information we are collecting--call logs, Bluetooth devices in proximity, cell tower IDs, application usage, and phone status (such as charging and idle)--comes primarily from the Context application (see Figure 1). The study has generated data from 100 human subjects over the course of nine months, representing over 350,000 hours of observations of users' location, communication, and device-usage behavior. Upon completion of the study, we plan to release a public, anonymized version of the dataset for other researchers to use.

figure 1
Figure 1. Movement and communication visualization of the Reality Mining subjects. In collaboration with Redfish Inc., we have built a Macromedia Shockwave visualization of the movement and communication behaviors of our subjects. Location is based on the approximate position of cell towers, while the links between subjects indicate phone communication.
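
The anonymization mentioned above deserves a brief illustration. One common approach, sketched below in Python, is to replace phone numbers and Bluetooth addresses with salted one-way hashes before release; this is a hypothetical example of the general technique, not a description of the Reality Mining project's actual procedure.

    # Replace identifying strings with stable, non-reversible tokens before release.
    # The salt value and field names here are made up for illustration.
    import hashlib

    SALT = b"project-specific-secret"  # hypothetical; kept private by the researchers

    def anonymize_id(raw_id: str) -> str:
        """Map a phone number or MAC address to a stable pseudonym."""
        return hashlib.sha256(SALT + raw_id.encode("utf-8")).hexdigest()[:16]

    record = {"caller": "+16175550123", "callee": "+16175550456", "duration_s": 142}
    anonymized = {
        "caller": anonymize_id(record["caller"]),
        "callee": anonymize_id(record["callee"]),
        "duration_s": record["duration_s"],  # non-identifying fields pass through
    }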

Social Sciences

One particular ramification of living in this new age of connectivity relates to data gathering in the social sciences. For almost a century, social scientists have studied particular demographics through surveys, or by placing human observers in social environments such as the workplace or the school. Over that time, the tools for analyzing survey and observation data have become increasingly sophisticated. Within the last decade, however, new methods of quantifying interaction and behavior among people have emerged that no longer require surveys or a human observer. The resulting datasets are several orders of magnitude larger than anything previously possible. Initially, this data was limited to representing people's online interactions and behavior, typically through analysis of email or instant messaging networks.

However, social science is now at a critical point in its evolution as a discipline. The field is about to be inundated with massive amounts of data, no longer limited to human behavior in the online world. Soon, datasets on almost every aspect of human life will become available. And while social scientists have become quite good at working with sparse datasets involving discrete observations and surveys of several dozen subjects over a few months, the field is not prepared to deal with continuous behavioral data from thousands, and soon millions, of people. The old methods simply won't scale.

To deal with the massive amounts of continuous human behavioral data that will be available in the 21st century, it will be necessary to draw on a range of fields, from traditional social network analysis to particle physics and statistical mechanics. We are borrowing algorithms developed in the field of computer vision to predict an individual's affiliations and future actions. Tools from the burgeoning discipline of complexity theory will help us gain a better understanding of aggregate behavior. And it is my hope as an engineer that these new insights into our own behaviors will enable us to develop applications that better support both the individual and group.

Phone Usage Statistics

Capturing mobile phone usage patterns for 100 people over an extended time period provides insight both into the users and into the ease of use of the device itself. For example, 35 percent of our subjects use the clock application on a regular basis (primarily to set the alarm and then press snooze an average of 2.4 times per morning for Media Lab students, and 0.6 times per morning for Sloan business school students), yet it takes ten keystrokes to open the application from the phone's default settings. Not surprisingly, specific applications, such as the alarm clock, are used much more often at home than at work. Figure 2 graphs the aggregate popularity of a set of applications at home, at work, and elsewhere. It is interesting to note that despite the subjects being technically savvy, the phone's more sophisticated features saw little use; indeed, the default game Snake was used just as much as the elaborate Media Player application.

figure 2
Figure 2. Average application usage in three locations (Home, Work, and Other) for 100 subjects. The x-axis displays each application's share of total application usage. For example, use of the clock application at home comprises almost 3 percent of the total times the phone is used. The "phone" application itself comprises more than 80 percent of total usage and is not included in this figure.
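
The aggregation behind a chart like Figure 2 is simple to reproduce. The Python sketch below computes, from a list of (location, application) launch events, the share of total usage each pair accounts for; the input format is an assumption for illustration, not the study's actual log schema.

    # Share of total application usage per (location, application) pair.
    from collections import Counter

    # Toy data: one entry per logged application launch.
    launches = [
        ("Home", "Clock"), ("Home", "Clock"), ("Home", "Snake"),
        ("Work", "Calendar"), ("Work", "Media Player"), ("Other", "Camera"),
    ]

    counts = Counter(launches)
    total = sum(counts.values())

    for (location, app), n in counts.most_common():
        print(f"{location:>5}  {app:<12} {n / total:.1%} of all launches")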

LifeLog


The last several years have seen search engine companies such as Google, MSN Search, and Yahoo reinvent themselves, from companies that simply perform optimized queries on cached web content to multimedia search engines for all types of content, including images, email, and even files on users' personal computers. With Yahoo's Blog Directories, MSN Spaces, and Google's recent acquisition of Blogger, it is clear that logging and mining the content from users' blogs has become a major priority for the search industry. However, while many bloggers relish the opportunity to transcribe and publish the minutiae of their lives, most people don't take the time to write such diaries.

Nevertheless, the fact that most people do not want to spend time manually logging daily experiences does not imply that the logs themselves are undesired. In 1945, Vannevar Bush laid out his vision for the memex, a device that records every detail of human memory and facilitates simple search and retrieval of experiences. The memex was technically infeasible in Bush's era, but with the advent of wearable sensors and large amounts of cheap disk space, many researchers today have begun pursuing their own versions of it: see Clarkson (2002) and Gemmell (2005).

In collaboration with Mike Lambert, we have created an interactive, automatically generated diary application called LifeLog that enables users to query their own lives ("When was the last time I went out on the town with Mike? Where were we? Who else was there? When did I get home?"). Labels for locations can be entered by the user through the web interface, but are also gathered on the phone itself: if a user spends a significant amount of time in the range of a specific cell tower, the Context application makes the phone vibrate and prompts the user to name the location or situation. Examples of user-supplied names include "Media Lab," "My Dorm," "Mike's Apartment," "Club Downtown," and so on. The current version of LifeLog has been redesigned in Java to provide much faster load times and easier navigation, and is shown in Figure 3 below.

figure 3
Figure 3. LifeLog: automatic diary generation. LifeLog provides a visualization of the data and inferences from the Reality Mining phone logs. It also incorporates the ability to perform "life queries," allowing the user to search through previous events and experiences.
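
A "life query" of this kind reduces to a search over the logged proximity and location events. The Python sketch below answers a question like "When was the last time I was with Mike?" against a toy proximity log; the log format and device names are assumptions for illustration, not LifeLog's actual implementation.

    # Find the most recent time a named device appeared in the user's Bluetooth logs.
    from datetime import datetime

    # Toy proximity log: (timestamp, named Bluetooth device in range).
    proximity_log = [
        (datetime(2005, 1, 14, 22, 30), "Mike's phone"),
        (datetime(2005, 1, 21, 19, 5), "Mike's phone"),
        (datetime(2005, 1, 22, 9, 0), "Lab desktop"),
    ]

    def last_time_with(device_name):
        sightings = [t for t, name in proximity_log if name == device_name]
        return max(sightings) if sightings else None

    print(last_time_with("Mike's phone"))  # -> 2005-01-21 19:05:00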

Behavior Prediction

While individuals have the potential for relatively random patterns of behavior, typically there are easily identifiable routines in every person's life. These can be found on a range of timescales: from the daily routines of getting out of bed, eating lunch, and driving home from work; to weekly patterns such as Saturday afternoon softball games; to yearly patterns like seeing family during the holidays. Many of these patterns in behavior are easy to recognize; however, some are more subtle. Our goal is to create a system that can accurately perceive and predict actions in a user's life using data from mobile phones.

We attempt to quantify the amount of predictable structure in an individual's life using an information entropy metric. In information theory, the amount of randomness in a signal corresponds to its entropy, as defined by Claude Shannon in his 1948 paper "A Mathematical Theory of Communication" and given in the equation below.

Equation 1. H(X) = -Σ_i p(x_i) log₂ p(x_i), where p(x_i) is the probability that the signal takes the value x_i.

For a more concrete example, consider image compression (such as the JPEG standard) of an overhead photo of an empty checkerboard. In theory, this image can be compressed significantly, because it does not contain much "information": essentially, the entire image can be recreated from the same simple, repeating pattern. If, however, the picture were taken in the middle of a match, the pieces on the board would introduce more randomness into the image, and the compressed file would be larger because the image contains more information, or entropy.
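
As a minimal illustration of Equation 1 itself, the entropy of a distribution over location states can be computed in a few lines of Python; the probabilities below are invented for the example.

    # Shannon entropy (Equation 1) of a discrete distribution, in bits.
    import math

    def entropy(probabilities):
        return -sum(p * math.log2(p) for p in probabilities if p > 0)

    # A day spent mostly at work is more predictable than one spread evenly
    # across four states.
    print(entropy([0.7, 0.2, 0.05, 0.05]))    # about 1.26 bits
    print(entropy([0.25, 0.25, 0.25, 0.25]))  # 2.0 bits, the maximum for four states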

Much like the mid-match checkerboard, people who lead high-entropy lives tend to be more variable and harder to predict, while low-entropy lives are characterized by strong patterns across all timescales. Figure 4 depicts the patterns in cell-tower transitions and the total number of Bluetooth devices encountered each hour during the month of January for Subject 9, a "low-entropy" subject.

figure 4
Figure 4. A "low-entropy" (H = 30.9) subject's daily distribution of home and work transitions and Bluetooth device encounters during the month of January. The top plot shows the most likely location of the subject: Work, Home, Elsewhere, or No Signal. While the subject's state sporadically jumps to No Signal, the other states occur with very regular frequency. This is confirmed by the Bluetooth encounters plotted below, which reflect the structured working schedule of this "low-entropy" subject.

It is clear that the subject is typically at home during the evening until 8:00 a.m., when he commutes to work; he then stays at work until 6:00 p.m., when he returns home. Almost all of the Bluetooth devices are detected during these regular office hours, Monday through Friday. This is certainly not the case for many of the subjects. Figure 5 displays a different set of behaviors for Subject 8: this subject's location patterns are much less regular, and in the evenings other mobile devices are frequently in close proximity.

figure 5
Figure 5. A "high-entropy" (H = 48.5) subject's daily distribution of home and work transitions and Bluetooth device encounters during the month of January. In contrast to Figure 4, the lack of readily apparent routine and structure makes this subject's behavior harder to model and predict.

Calculating life's entropy can serve as a method of self-reflection on the routines (or ruts) in one's life, but it can also be used to compare the behaviors of different demographics. Figure 6 shows the average weekly entropy of each of the demographics in our study, based on their location (Work, Home, No Signal, or Elsewhere) each hour. Average weekly entropy was calculated by drawing 100 samples of a seven-day period for each subject in the study. No surprise to most, the Media Lab freshman undergraduates are the most entropic of the group: they do not come into the lab on a regular basis and have seemingly random behavior. (The entropy of a sequence of 168 random numbers is approximately 60.) The graduate students (Media Lab incoming, Media Lab senior, and Sloan incoming) are the next most entropic, in that order. Finally, the Media Lab faculty and staff have the most rigidity in their schedules, reflected in their relatively low average entropy measures.

figure 6
Figure 6. Entropy, H(x), calculated from the (Work, Home, No Signal, Elsewhere) set of behaviors over 100 samples of a seven-day period. The Media Lab freshmen have the least predictable schedules, which makes sense because they come to the lab on a much less regular basis. The staff and faculty have the least entropic schedules, typically adhering to a consistent work routine.
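
To make the averaging procedure concrete, the sketch below draws 100 random seven-day windows from a sequence of hourly location labels and averages an entropy score per window. Two details are assumptions rather than facts from the study: the input is an hourly sequence of the four states, and each window's entropy is the sum, over the 24 hours of the day, of the entropy of the states observed at that hour across the week. The study's exact formulation may differ, so this sketch should not be expected to reproduce the H values quoted above.

    # Average weekly entropy from hourly location labels (simplified illustration).
    import math
    import random
    from collections import Counter

    def entropy(counts):
        total = sum(counts.values())
        return -sum((n / total) * math.log2(n / total) for n in counts.values())

    def weekly_entropy(window):  # window: 168 hourly state labels
        per_hour = [Counter(window[h::24]) for h in range(24)]
        return sum(entropy(c) for c in per_hour)

    def average_weekly_entropy(hourly_states, samples=100):
        starts = [random.randrange(len(hourly_states) - 168) for _ in range(samples)]
        return sum(weekly_entropy(hourly_states[s:s + 168]) for s in starts) / samples

    # Toy data: a perfectly regular schedule yields an average weekly entropy of 0.
    day = ["Home"] * 8 + ["Work"] * 10 + ["Elsewhere"] * 2 + ["Home"] * 4
    print(average_weekly_entropy(day * 60))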

Conclusion

The work described in this article is just a sampling of our latest projects using the Reality Mining dataset. We are currently building probabilistic graphical models to classify a user's situation, eigendecomposition techniques to predict what he or she will do next, and classifiers that can infer the user's social network. However, this work should not be thought of as a quest for a universal equation of human behavior; we are not trying to create a deterministic system into which we simply feed data and have it output an elegant description of future human behavior. Rather, we can attain an increased understanding of complex social systems by accumulating examples of how patterns of behavior emerge from the idiosyncratic actions of many individuals. This understanding could not only lead to applications that better support the individual and the group, but could also inform the design of organizations, schools, and office buildings so that they conform to how we actually behave and encourage beneficial social interactions.

There is much more to be done, and it is our hope that this new type of data will inspire research in a variety of fields, including qualitative social science, computational epidemiology, organizational behavior, statistical mechanics, social network analysis, and machine learning.

Nathan Eagle is a postdoctoral fellow at MIT's Media Lab, where he recently completed his Ph.D. His dissertation on machine perception and learning of complex social systems explored the intersections of social network analysis, machine learning, and signal processing. Part of this work was spun out into SenseSix, a mobile matchmaking startup.

