Abstracts
Title: Continuous Geospatial Monitoring of Catastrophic Natural Disasters using Twitter
Recent studies have shown that people tend to communicate their first-hand experiences on Twitter (a popular micro-blogging website) during crisis events. Unlike news reports, Twitter feeds contain real-time information, updated by people in various locations during the crisis. However, tweets present two primary problems. First, tweets are noisy and unstructured, making it non-trivial to automatically identify the important, time-sensitive information. Second, only a small number of tweets include accurate locational information.
This project has three primary goals. First, we use tweets to automatically detect crisis events in real time. Second, we plan to identify the locations referred to in tweets. Third, once a crisis event has been detected, we automatically identify newsworthy messages related to the event that can help crisis management teams monitor the unfolding event and deploy resources.
Recent Progress
To detect from the Twitter stream that an event has occurred, we develop automatic methods to generate "disaster signatures", which consist of event words generic to the event class and specific words pertaining to the current instance. For example, an earthquake signature includes generic words such as "earthquake", "shake", and "magnitude", as well as the specific location and time of the instance. These signatures can be used to retrieve relevant tweets and construct a timeline of the disasters that have occurred and been actively discussed on Twitter.
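As a rough illustration of how a signature can be matched against the stream, the sketch below scores tweets by their overlap with generic event words and instance-specific terms; the keyword lists, weights, and example tweets are hypothetical, not the project's actual signature model.

```python
import re

# Minimal sketch of scoring tweets against a "disaster signature".
# The keyword lists, weights, and example tweets are illustrative only.

GENERIC_EARTHQUAKE = {"earthquake", "quake", "shake", "shaking", "magnitude", "aftershock"}

def signature_score(tweet, specific_terms, generic_weight=1.0, specific_weight=2.0):
    """Score a tweet by its overlap with generic event words and
    instance-specific terms (e.g., the location and date of this earthquake)."""
    tokens = set(re.findall(r"[a-z0-9']+", tweet.lower()))
    generic_hits = len(tokens & GENERIC_EARTHQUAKE)
    specific_hits = len(tokens & {t.lower() for t in specific_terms})
    return generic_weight * generic_hits + specific_weight * specific_hits

tweets = [
    "Huge earthquake just hit Christchurch, magnitude 6.3, still shaking",
    "Nice weather in Christchurch today",
]
for t in tweets:
    print(signature_score(t, specific_terms={"Christchurch", "February"}), "|", t)
```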
Ongoing Work
We are now working to identify the locations of tweets. There are two cases: the tweet explicitly includes GPS coordinates, or locational clues appear implicitly in the text. Combining both, we can estimate the locations of the various events detected on Twitter during a disaster. Such information may be invaluable to crisis management teams and first responders, who can then be deployed to more precisely targeted locations.
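A minimal sketch of combining the two cases is shown below: explicit coordinates are used when present, and otherwise place names in the text are matched against a gazetteer; the tiny gazetteer and the substring matching are illustrative placeholders for the actual components.

```python
# Sketch: prefer explicit GPS coordinates; otherwise fall back to matching
# place names in the text against a (tiny, illustrative) gazetteer.

GAZETTEER = {  # place name -> (lat, lon); illustrative entries only
    "christchurch": (-43.53, 172.64),
    "los angeles": (34.05, -118.24),
}

def locate(tweet_text, geo=None):
    if geo is not None:                              # explicit case: tweet carries coordinates
        return geo, "explicit"
    text = tweet_text.lower()
    for place, coords in GAZETTEER.items():          # implicit case: textual clues
        if place in text:
            return coords, "implicit"
    return None, "unknown"

print(locate("Roads cracked all over Christchurch"))       # implicit
print(locate("I felt it too", geo=(-43.52, 172.63)))        # explicit
```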
Title: Linking and Summarizing Textual Information In a Geo-Spatial Information Display System for Response and Recovery
Project Scope
Geospatial display systems play an important role in response and recovery efforts when a crisis happens. Crisis management teams can visually browse the affected area and fully utilize the transportation infrastructure to respond and recover. Although geospatial imagery and maps show geometric relations among entities, they cannot present other kinds of knowledge, such as temporal, topical, and other conceptual relations among entities.
We are collaborating with GeoSemble Inc. on this project, which has two primary components. First, GeoSemble develops methods to automatically link different types of textual information to the entities on a map. Second, we summarize the large amounts of textual material linked to each entity (buildings, roads, bridges, etc.) to describe the information (topics, events, statistics, etc.) that the text contains about it.
Recent Progress
We develop automatic methods to link maps and associated imagery with textual information based on various features in the text, such as addresses, business names, and road names. Prototypes of the system have been implemented for several U.S. cities, including Los Angeles, San Francisco, and Washington, D.C.
However, the huge amount of textual material linked to the map prevents crisis management teams from finding the most important information efficiently. Given the limited display space in most geospatial display systems, we automatically summarize the textual information in a hierarchical way, displaying incrementally longer summaries for each point of interest. A very short 'thumbnail' (one or two keywords, e.g., 'heavy traffic') appears at the top level to categorize the associated documents; longer summaries, from sentences to paragraphs, can be explored at deeper levels.
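The display logic can be thought of as choosing a level from a precomputed summary hierarchy based on the space available for a point of interest; the sketch below is a hypothetical illustration, and the levels, contents, and thresholds are not drawn from the actual system.

```python
# Sketch: pick which level of a precomputed summary hierarchy to show for a
# point of interest, given the display space available (in characters).
# The hierarchy contents and the space budgets are illustrative.

summary_hierarchy = {
    "thumbnail": "heavy traffic",
    "sentence": "Heavy traffic reported on the bridge after the morning collision.",
    "paragraph": ("Heavy traffic reported on the bridge after the morning collision. "
                  "Two lanes remain closed and emergency crews are on site. "
                  "Drivers are advised to use the downtown detour."),
}

def summary_for_space(hierarchy, available_chars):
    """Return the longest summary level that fits in the available space."""
    for level in ("paragraph", "sentence", "thumbnail"):
        if len(hierarchy[level]) <= available_chars:
            return level, hierarchy[level]
    return "thumbnail", hierarchy["thumbnail"]  # always show at least the thumbnail

print(summary_for_space(summary_hierarchy, 40))    # only the thumbnail fits
print(summary_for_space(summary_hierarchy, 200))   # the full paragraph fits
```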
Ongoing Work
While continuously improving linking and summarization performance, we are also conducting user studies to find the best way of displaying the summaries. We conduct eye-tracking experiments to monitor users' eye movements during different tasks. By analyzing the eye movements, we aim to determine the appropriate level of compression for summarization and the optimal way to place and display the summaries (e.g., as clusters or trees) on the map.
Title: Affect Segmentation and Recognition by Fusion of Facial Features and Body Gesture
Problem Statement: Automatic affect recognition can be applied to many real-world applications, including transportation security, lie detection, video surveillance, intelligent tutoring, and human-computer interaction. Affect recognition from facial features has been widely studied. However, affect recognition from body gestures has attracted attention only recently, inspired by findings in psychology [1].
Discussion on Methodology: Our proposed methodology combines facial features and body gestures for affect recognition. Two simple features, motion area and neutral divergence, are used to temporally segment an expression into neutral, onset, apex, and offset phases. The video frames in the apex phase are then selected for affect recognition using Histogram of Oriented Gradients (HOG) features on both the face and the body.
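The sketch below illustrates the kind of threshold-based temporal segmentation described above, using simple frame-differencing as a stand-in for the motion area and neutral divergence features of [3]; the thresholds and the synthetic clip are illustrative only.

```python
import numpy as np

# Sketch: label each frame of an expression clip as neutral / onset / apex / offset
# from two per-frame quantities: "motion" (difference from the previous frame) and
# "divergence" (difference from a neutral reference frame). Thresholds and the
# synthetic clip are illustrative, not the actual features of [3].

def segment_phases(frames, neutral_frame, motion_thr=0.05, div_thr=0.1):
    labels = []
    prev = neutral_frame
    for f in frames:
        motion = np.mean(np.abs(f - prev))                 # rough "motion area"
        divergence = np.mean(np.abs(f - neutral_frame))    # rough "neutral divergence"
        if divergence < div_thr:
            labels.append("neutral")
        elif motion < motion_thr:
            labels.append("apex")                          # far from neutral, little motion
        elif divergence > np.mean(np.abs(prev - neutral_frame)):
            labels.append("onset")                         # moving away from neutral
        else:
            labels.append("offset")                        # returning toward neutral
        prev = f
    return labels

rng = np.random.default_rng(0)
neutral = rng.random((48, 48)) * 0.01
clip = [neutral + a for a in [0.0, 0.1, 0.2, 0.3, 0.3, 0.3, 0.2, 0.1, 0.0]]
print(segment_phases(clip, neutral))
```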
Discussion on data collection techniques: We conduct experiments on FABO [2], a bi-modal face-and-body benchmark database. The experiments cover 10 expressions, including both basic and non-basic expressions. The basic expressions are "Disgust", "Fear", "Happiness", "Surprise", "Sadness", and "Anger"; the non-basic expressions are "Anxiety", "Boredom", "Puzzlement", and "Uncertainty". There are 288 expression videos in total.
Results: Experiments show promising results for the proposed approach. Under 3-fold cross-validation, the temporal segmentation detection rate is 83%, which exceeds the state-of-the-art performance [2] by almost 3%. Compared to the state-of-the-art performance with facial features only [2], our HOG-based affect recognition rate improves by 8.4%.
Conclusion, future research and reference: The preliminary experiments show promising results on the temporal segmentation of an expression and on facial-feature-based affect recognition. Future research will focus on body gesture features and on an effective fusion framework that incorporates both facial features and body gestures in affect recognition.
[1] N. Ambady and R. Rosenthal, "Thin slices of expressive behavior as predictors of interpersonal consequences: A meta-analysis," Psychol. Bull., vol. 111, no. 2, pp. 256-274, 1992.
[2] H. Gunes and M. Piccardi, "Automatic Temporal Segment Detection and Affect Recognition From Face and Body Display," IEEE Transactions on Systems, Man, and Cybernetics - Part B: Cybernetics, vol. 39, no. 1, 2009.
[3] S. Chen, Y. Tian, Q. Liu, and D. Metaxas, "Segment and Recognize Expression Phase by Fusion of Motion Area and Neutral Divergence Features," IEEE Conference on Automatic Face and Gesture Recognition, 2011.
Title: Virtual Worlds and Human Behavior: A case study in health risks
Current medical theories explaining rates of high-risk behavior among adolescents - including drug experimentation and abuse - blame irrational assumptions of "invincibility" to the risks. Public health efforts have therefore sought to debunk these impressions by focusing explicitly on the risks, assuming that if only individuals could be convinced of their own susceptibility, they would avoid adopting risky behaviors. However, this assumption of inappropriate risk assessment may not be the only explanation for adolescents accepting higher levels of exposure to risk than adults faced with the same choices.
As a result of health-related behaviors observed during an introduced outbreak of infectious disease (the Whypox) in the virtual world of Whyville, we propose an alternative hypothesis that still fits the available evidence: due to the strong social and emotional bonds that characterize the pubescent developmental ages, certain populations may view poor medical outcomes among the benefits, rather than the costs, of risky behaviors. That is, the desire to connect emotionally with already-affected peers may motivate adolescents to engage in high-risk activities, despite their awareness of the detrimental health effects associated with such behavior.
We will describe the original Whypox outbreak and the observed reactions that led us to propose this hypothesis, as well as our plans to design subsequent outbreaks to further investigate health-related social behavior. We will then discuss how this new perspective could translate into alternative public health efforts to curb high-risk behaviors among teens. Lastly, we will discuss how virtual world settings provide access to insights into human behavior (such as those presented) that are otherwise inaccessible to scientific investigation.
Title: The AWSoMe Project: Alerts and Warning with Social Media
Social media, as seen in the Middle East uprisings, can facilitate the spread of valuable information among 'connected' persons, who can in turn inform or learn from those around them. The AWSoMe project is designing experiments to learn how the power of these media can be used to increase the effectiveness of first responders. The project reveals the complex balance among the goals of providing useful information for analysis, providing findings that will be of value to the first responder community, and meeting the constraints of public order on campus. Thus, for example, we cannot study how well students might detect and disseminate word about a threat by sending masked men with toy guns into the campus center. We report on progress in designing a suitable pilot experiment, modeled on the DARPA Network Challenge. The goal of this pilot is to provide preliminary data for analysis and to inform us about the range of technologies and strategies that students will spontaneously invent or adopt when faced with an alerts-and-warnings type of task.
Joint work with Nina Fefferman (Rutgers), Eduard Hovy (ISI), and William (Al) Wallace (RPI).
Title: JIT-Transportation Model Applied to Homeland Security Problems
We propose to apply a Just-in-Time (JIT) transportation model [1] to homeland security transportation problems to evaluate risk management for natural disasters, man-made disasters, and terrorism. The model is a new algorithm for solving goal and nonlinear programming problems. It is novel in that it prioritizes shipping and delivery times, which are critical when responding to catastrophic events. The transportation network is large and dynamic; flow can fail randomly due to the after-effects of a disaster, and flow can return to service after repairs. The model will be used to help assess the effectiveness of risk management across natural disasters, man-made disasters, terrorist threats, and target domains (professional athletic stadiums, concert halls, etc.). The model can possibly be linked to CCICADA's evacuation tools and transportation system projects. See [2] and [3] for more information on JIT models.
[1] G.Z. Bai and X. Gan, JIT-Transportation Problem and Its Algorithm, International Journal of Systems Science, forthcoming (first published online: 10 December 2010).
[2] N. Runge and F. Sourd, A New Model for the Preemptive Earliness-Tardiness Scheduling Problem, Computers & Operations Research, 2009, 36, pp 2242-2249.
[3] G.Z. Bai and X. Gan, The Time limit Assignment Problems, International Journal of Applied Mathematics and Statistics, Vol. 13, No. S08, 2008, pp. 31-40.
Title: Infectious Disease and Families: The effect of long-term social affiliations on the evolution of social complexity in the face of epidemics
Indirect benefits to individual fitness in social species can be influenced by a broad variety of behavioral factors. Behaviors that support the fitness of kin provide indirect benefits to individuals in the form of the evolutionary success of relatives (Hamiltonian inclusive fitness). Further, individuals who participate in groups that achieve successful organizational structures may each enjoy their share of added indirect benefits. However, social interactions among members of a group have long been understood to expose populations to risk from infectious disease. Since it is natural to assume that infectious disease has provided consistent and substantial selective pressure through evolutionary time, a full understanding of the evolution of social complexity must include some examination of the impact of disease on population structure.
Previous studies have shown that populations displaying only selfish local behaviors could, in the absence of disease, have evolved complex, highly stable social organizations, and that long-term social affiliations such as family bonds can aid in the advancement of social complexity. However, as the amount of social interaction in the population increases, so too does the potential disease burden. Building on previous models of individually motivated, self-organizing societies, we examine the trade-off, for the social organization, between the benefits of maintaining long-term social interactions and the detriment posed by the increased risk of disease.
Title: Entity Resolution of Terrorist Incidents
Entity resolution is the task of identifying records that refer to the same object, or entity. In our research we are developing algorithms that address entity resolution by leveraging multiple machine learning approaches, including information extraction, Bayesian learning, and Higher Order correlated topic modeling. In this paper we present our work on resolving incidents of terrorist attacks from two different data sources: the Global Terrorism Database (www.start.umd.edu/gtd) and the Worldwide Incidents Tracking System (wits.nctc.gov). The two datasets vary in size, structure, format, frequency of events, and level of detail. Our goal is to identify matching events across the two datasets, given a small hand-crafted training set developed by the START Center of Excellence.
Our approach involves three phases: (1) coding and standardizing the two datasets; (2) calculating similarities for each pair of incidents; and (3) applying a classifier and evaluating its performance using ground truth data. In addition to applying standard coding techniques, we removed repetitive patterns, such as dates and times, from summary text fields, and we used Google's Geocoding service to obtain real-world location coordinates. In the second phase, we employed several unsupervised similarity measures between incidents: a normalized numeric distance function for numeric values, Jaro-Winkler for short nominal strings, and geographic distance for geospatial coordinates. In addition, we developed a modified version of Latent Dirichlet Allocation (LDA) that incorporates higher order paths (Higher Order Learning) to identify latent topics in incident descriptions. Based on the resulting distribution over latent topics for each incident, we computed a pairwise Kullback-Leibler divergence score. In the third phase, we examined the importance of each feature using various attribute selection algorithms and retrained the classifier using only the most prominent features.
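The sketch below illustrates three of the pairwise similarity features (normalized numeric distance, geographic distance, and a symmetrized KL divergence over topic distributions) on two made-up incident records; the Jaro-Winkler and Higher Order LDA components are omitted, and the implementations shown are generic stand-ins rather than the project's code.

```python
import math
import numpy as np

# Sketch of three of the pairwise similarity features described above.
# The incident values below are made up for illustration.

def normalized_numeric_distance(a, b):
    """Numeric similarity in [0, 1]; 1 means identical."""
    denom = max(abs(a), abs(b), 1e-9)
    return 1.0 - abs(a - b) / denom

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two coordinate pairs, in kilometers."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi, dlmb = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    h = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(h))

def symmetric_kl(p, q, eps=1e-12):
    """Symmetrized KL divergence between two topic distributions."""
    p, q = np.asarray(p) + eps, np.asarray(q) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

# Two hypothetical incident records (fatalities, coordinates, topic mixture).
gtd  = {"fatalities": 4, "lat": 33.312, "lon": 44.361, "topics": [0.7, 0.2, 0.1]}
wits = {"fatalities": 5, "lat": 33.340, "lon": 44.400, "topics": [0.6, 0.3, 0.1]}

features = {
    "fatalities_sim": normalized_numeric_distance(gtd["fatalities"], wits["fatalities"]),
    "geo_dist_km": haversine_km(gtd["lat"], gtd["lon"], wits["lat"], wits["lon"]),
    "topic_kl": symmetric_kl(gtd["topics"], wits["topics"]),
}
print(features)
```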
While our development and evaluation of the overall approach is still in progress, we report preliminary results in which more than 90% of the instances were correctly classified, with ~70% recall for matching pairs. Given the imbalance between positive (matching) and negative (non-matching) incident pairs in the original training data, in ongoing work we are exploring ways to better sample the negative class.
Title: A new methodology for outbreak detection
Building on earlier CCICADA research, we propose a new methodology for outbreak detection based on information-theoretic methods. We assume that syndromes are monitored at discrete time steps and are Poisson distributed. Current methods rely on sequential probability ratio tests with the likelihood as the weight in a cumulative sum. We propose a new strategy based on computing the Shannon entropy for a set of unique symbols whose set size is measured as the deviation from a well-known historical count. After this entropy computation, we compute a cumulative sum based on the entropy of each time observation. The sum is weighted over all observations in an O(n^k) fashion, with k > 1. As in a probability ratio test, if this sum crosses a threshold we classify the time period as an outbreak. We present preliminary results on the best weight and the amount of history to include when detecting an outbreak.
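One illustrative reading of the proposed statistic is sketched below: the deviation of each observed count from its historical mean sets the size of a symbol set, the Shannon entropy of a uniform distribution over that set is added to a cumulative sum, and an alarm is raised when the sum crosses a threshold. The mapping, the drift correction, and the threshold are placeholders, not the exact statistic under development.

```python
import math

# Illustrative reading of the entropy-based cumulative-sum idea described above.
# The deviation-to-set-size mapping, the uniform-entropy choice, the drift
# correction, and the threshold are placeholders, not the authors' statistic.

def entropy_cusum(counts, historical_mean, threshold=2.0):
    """Flag time periods where the entropy-weighted cumulative sum crosses a threshold."""
    cusum, alarms = 0.0, []
    for t, c in enumerate(counts):
        deviation = max(int(round(abs(c - historical_mean))), 1)
        # Shannon entropy of a uniform distribution over `deviation` unique symbols.
        entropy = math.log2(deviation)
        cusum += entropy
        # Drift correction so that ordinary fluctuations do not accumulate.
        cusum = max(cusum - math.log2(max(historical_mean, 2)), 0.0)
        if cusum > threshold:
            alarms.append(t)
    return alarms

baseline = 10.0                                   # well-known historical daily count
observed = [9, 11, 10, 12, 25, 30, 28, 11, 10]    # synthetic syndromic counts
print(entropy_cusum(observed, baseline))          # indices flagged as outbreak periods
```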
Title: Human risk perception and HIV spread
This project was begun during the DIMACS REU in collaboration with Dr. Amira Kebir, a DIMACS visiting mathematician, and Dr. Nina Fefferman. We expect to submit a paper for publication by May 2011.
We would like to analyze the extent to which human risk estimation affects the spread of HIV through sexual transmission. We have used game theory to build a simulation in which every sexual encounter is a game, and we separate the population into those who get regularly tested and those who rarely get tested for infection. The type of game played depends on the knowledge available to the players. The overall mathematical model becomes a Markov process; the simulation was implemented in MATLAB. We will analyze sensitivity to parameters such as individual utility of sexual contact and risk estimation, as well as frequency of testing for infection.
We are also interested in how the choice of risk perception parameter affects the infection curve. In particular, we want to see whether infection rates increase when the perception of risk decreases, even while actual risk increases. This would reflect the historical pattern of infection when the first antiretroviral drug, AZT, was introduced: people no longer felt AIDS was as much of a threat because fewer people were visibly symptomatic, and transmission increased as a result. This model could provide insight into the design of public awareness campaigns that incorporate the impact of risk estimation on behavior and the resulting exposure.
Title: Optimal Monitoring Interval Length in IP Network Anomaly Detection Systems
Anomaly detection systems for Internet Protocol (IP) networks offer the potential to identify new attacks before anomaly signatures are established. To do so, many anomaly detection systems build models of normal user activity from historical data and then use these normal models to identify deviations from normal (i.e., benign) behavior caused by attacks. In this work, we develop a method for computing Optimal Monitoring Interval Length (OMIL) for anomaly detection using time series analysis and protocol graphs. Protocol graphs have been used in anomaly detection and are graph-based representations of logged network traffic. These protocol graphs model the communication relationships between origin-destination (OD) pairs, allowing us to identify malicious traffic targeting potentially vulnerable parties. It is common practice in anomaly detection to perform time series analysis on network statistics in order to identify deviation from normal behavior. This work aims to improve the existing process by solving for an OMIL for time series analysis. Despite the popularity of time series analysis of network statistics, a method for determining optimal interval length does not exist.
The prevailing approach in time series analysis is to choose a monitoring interval length that appeals to human standards (e.g., 60 seconds, 10 minutes) rather than one that will optimally detect anomalies. The criteria for optimality depend on the network administrator, but they can and should be made concrete. For example, the network administrator may wish to maximize the true positive rate subject to a constraint on computation time. In previous anomaly detection work, authors go to great lengths to justify their detection methods but not the monitoring interval length (MIL) used, which plays an important role in the process. Moreover, it is not known whether detection methods work equally well independent of the MIL used. The result of this work is a process for determining an OMIL for time series analysis that allows the network administrator to optimize any one of the relevant objectives, such as error rates, computation cost, or expected detection latency.
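The kind of search such a process implies is sketched below: the same traffic trace is aggregated at several candidate interval lengths, a simple detector is run at each, and the length that maximizes detection performance under a cost constraint is selected; the synthetic trace, z-score detector, and cost model are placeholders, not the method being developed.

```python
import numpy as np

# Sketch: choose a monitoring interval length (MIL) by grid search.
# The per-second traffic trace, the z-score detector, and the cost model
# are synthetic placeholders for illustration only.

rng = np.random.default_rng(1)
seconds = 3600
traffic = rng.poisson(50, seconds).astype(float)
traffic[1800:1900] += 40                       # injected anomaly window (ground truth)

def detect(series, z_thresh=3.0):
    """Flag intervals whose count deviates from the series mean by > z_thresh sigmas."""
    mu, sigma = series.mean(), series.std() + 1e-9
    return np.abs(series - mu) / sigma > z_thresh

def evaluate(mil):
    n = seconds // mil
    binned = traffic[: n * mil].reshape(n, mil).sum(axis=1)
    flags = detect(binned)
    truth = np.zeros(n, dtype=bool)
    truth[1800 // mil : 1900 // mil + 1] = True  # bins overlapping the anomaly
    tpr = (flags & truth).sum() / max(truth.sum(), 1)
    cost = n                                     # more bins -> more analysis work
    return tpr, cost

candidates = [10, 30, 60, 120, 300]
best = max((mil for mil in candidates if evaluate(mil)[1] <= 400),
           key=lambda mil: evaluate(mil)[0])
for mil in candidates:
    print(mil, evaluate(mil))
print("chosen MIL:", best)
```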
Title: An Exact Algorithm for Scheduling Multi-track Conferences
An academic conference may consist of as few as a handful of presentations occupying an afternoon, or as many as a thousand talks spanning an entire week. Each talk, or session of talks, is assigned to a particular time slot and track on a given day, and more sessions must run in parallel tracks as the size of the conference grows. Such scheduling is typically done in an ad hoc fashion, which may be adequate for small conferences or workshops but is prone to the common flaws of such an approach (e.g., multiple similar talks may be unintentionally scheduled in parallel because their titles are dissimilar).
We first model a simple form of the above scheduling problem that is common to large conferences: one with identical tracks and time slots. Such a symmetric problem can be seen as a natural variant of the well-known combinatorial optimization problem Minimum K-Partition, which we call the Capacitated Minimum K-Partition Problem (CMKP). It can be defined as follows: given a conference with N sessions, K time slots, at most M tracks allotted in any single time slot, a cost matrix (C_ij) whose ij-th entry measures the similarity between sessions i and j (based on the talks' abstracts), and a set S of pairs of sessions (i, j) that cannot be scheduled in parallel (due to overlapping speakers), find a partition of the N sessions into K parts such that each part has size at most M, no pair in S is placed in the same part, and the sum of the C_ij's over all pairs (i, j) scheduled together is minimized.
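For concreteness, one standard way to write CMKP as a binary quadratic program, with x_{ik} = 1 when session i is assigned to time slot k, is sketched below; this is a reconstruction from the definition above and not necessarily the formulation used inside the branch-and-cut algorithm.

```latex
\begin{align*}
\min \quad & \sum_{k=1}^{K} \sum_{i<j} C_{ij}\, x_{ik} x_{jk} \\
\text{s.t.} \quad & \sum_{k=1}^{K} x_{ik} = 1, && i = 1,\dots,N \quad \text{(each session gets one slot)}\\
& \sum_{i=1}^{N} x_{ik} \le M, && k = 1,\dots,K \quad \text{(at most $M$ parallel tracks)}\\
& x_{ik} + x_{jk} \le 1, && (i,j) \in S,\ k = 1,\dots,K \quad \text{(speaker conflicts)}\\
& x_{ik} \in \{0,1\}.
\end{align*}
```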
We shall describe our proposed exact branch-and-cut algorithm based on semidefinite programming for solving CMKP instances. Our algorithm can handle other relevant constraints even in the more general setting where each session is allowed to have arbitrary weight.
This is joint work with Jonathan Eckstein.
Title: When Will It Happen---Relationship Prediction in Heterogeneous Information Networks
Project Scope: Link prediction, i.e., predicting links or interactions between objects in a network, is an important task in network analysis. For example, a DHS-related task could be identifying the potential online users or organizations that a target user may contact within a time period, according to their past behaviors encoded in the network.
Existing studies focus on link prediction in homogeneous networks, where all objects belong to the same type and links represent connections between these objects. In the real world, however, heterogeneous networks consisting of multiple types of objects and the relationships between them are ubiquitous. This brings several challenges. First, link prediction is generalized to relationship prediction, i.e., predicting whether a certain relationship will be built between objects of heterogeneous types. Second, the traditional topological features need systematic re-examination to take the heterogeneity of objects and links into consideration. Further, most current studies only address whether a link will appear in the future and seldom consider when it will happen. In this project, we study the problem of predicting when a certain relationship will form in heterogeneous networks. First, we provide a systematic way to define topological features in heterogeneous networks. Then, we build models for the distribution of the relationship-building time given these features. Experiments on several real datasets show the effectiveness of our methodology compared with baselines that do not use the heterogeneity of the network.
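As a small illustration of what topological features in a heterogeneous network can look like, the sketch below counts instances of one meta-path (author - paper - venue - paper - author) between two objects in a toy typed graph; the toy data and the chosen meta-path are illustrative, not the project's actual feature set.

```python
from collections import defaultdict

# Sketch: count meta-path instances between two objects in a tiny typed graph.
# Node types, edges, and the meta-path below are illustrative only.

edges = [  # (source, relation, target)
    ("alice", "writes", "paper1"), ("bob", "writes", "paper1"),
    ("alice", "writes", "paper2"), ("carol", "writes", "paper2"),
    ("paper1", "published_in", "KDD"), ("paper2", "published_in", "KDD"),
]

adj = defaultdict(list)
for s, r, t in edges:
    adj[(s, r)].append(t)
    adj[(t, "inv_" + r)].append(s)   # add inverse relations for traversal

def meta_path_count(start, end, relations):
    """Count walks from `start` to `end` that follow the given relation sequence."""
    frontier = {start: 1}
    for rel in relations:
        nxt = defaultdict(int)
        for node, cnt in frontier.items():
            for nb in adj[(node, rel)]:
                nxt[nb] += cnt
        frontier = nxt
    return frontier.get(end, 0)

# Author -> Paper -> Venue -> Paper -> Author ("published at the same venue")
apvpa = ["writes", "published_in", "inv_published_in", "inv_writes"]
print(meta_path_count("alice", "bob", apvpa))
```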
Relevance to Research Area: This project can not only help to predict whether and when two people will build a specified relationship via information networks, but also help to detect which types of connections contribute to such relationship building.
Publications: Two papers along this line are in preparation for ASONAM'11 and PKDD'11.
Title: Strategies to Deploy Temporary Ambulatory Medical Services in Response to a Catastrophic Event
In the case of a catastrophic event, two major problems arise: traffic congestion en route to medical facilities and long wait times for service. To alleviate the congestion, primary emergency medical services can be deployed temporarily at strategic locations in such a way that the average wait time is minimized. A local analysis of the queuing system arising at each medical station is collected, and the feedback is used to reassign traffic, diverting patients to a less congested station when the wait time exceeds a preset tolerance. We propose an algorithm to dynamically assign the best temporary locations of primary medical services as a function of distance and user density across different locations of a metropolitan area. Visual analytics tools using confluent graphs will be used to help display data and support decisions on the fly.
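A minimal sketch of the reassignment rule is shown below: stations report M/M/1 expected waits as local queuing feedback, and arrivals are diverted from any station whose wait exceeds the tolerance to the nearest station below it; the station locations, rates, and tolerance are illustrative placeholders.

```python
# Sketch: divert patients from overloaded temporary stations using M/M/1 wait
# estimates as local feedback. Stations, rates, and tolerance are placeholders.

def mm1_wait(arrival_rate, service_rate):
    """Expected time in an M/M/1 system (waiting + service), in hours."""
    if arrival_rate >= service_rate:
        return float("inf")          # unstable queue
    return 1.0 / (service_rate - arrival_rate)

stations = {   # name -> [arrival rate (patients/hr), service rate, (x, y) location]
    "A": [18.0, 20.0, (0.0, 0.0)],
    "B": [5.0, 20.0, (3.0, 4.0)],
    "C": [12.0, 20.0, (6.0, 0.0)],
}

def reassign(stations, tolerance_hours=0.25, step=1.0):
    """Shift `step` patients/hr away from any station whose wait exceeds the tolerance."""
    for name, (lam, mu, loc) in stations.items():
        if mm1_wait(lam, mu) > tolerance_hours:
            # choose the nearest station that is currently below the tolerance
            candidates = [(((loc[0] - l[0]) ** 2 + (loc[1] - l[1]) ** 2) ** 0.5, other)
                          for other, (lam2, mu2, l) in stations.items()
                          if other != name and mm1_wait(lam2, mu2) <= tolerance_hours]
            if candidates:
                _, target = min(candidates)
                stations[name][0] -= step
                stations[target][0] += step

reassign(stations)                    # one round of feedback-driven diversion
for name, (lam, mu, _) in stations.items():
    print(name, round(mm1_wait(lam, mu), 3))
```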
The challenge will be to test the algorithm using real data associated with a specific geographic location.
We will simulate preliminary results using primary data collected from a study supported in part by the U.S. Department of Homeland Security through a grant awarded to the National Center for the Study of Preparedness and Critical Event Response at Johns Hopkins University. In this study, data was collected over a one-year period (October 1, 2005 to September 30, 2006) from a local hospital.
Title: The Evolution Towards Decentralized C2
We examine: (1) The degree to which the United States military is planning to move towards a more decentralized C2 paradigm; (2) the adoption of such a paradigm by adversaries; (3) the degree to which the U.S. is actually making the transition; and (4) the factors enabling and impeding the shift. We find that many adversaries of the west, including terrorist organizations and "hybrid enemies," are already operating in an agile, decentralized manner. Meanwhile, top-level strategic plans of the U.S. Department of Defense are consistent with a transition to net-enabled decentralized C2 for the U.S. military where appropriate, and the shift is supported by stated mission command doctrine. The transition is already occurring to some degree. In Afghanistan, for example, small Marine units operate with significant autonomy and edge-like behavior. The Department of Defense has also made progress in the use of web-enabled collaborative systems. These systems have broadened information distribution and stimulated new interaction patterns, although they have not changed the allocation of decision rights. Technologies enabling the shift to net-enabled decentralized C2 must be coupled with appropriate policies and procedures, and occasionally must overcome mid-level institutional cultural resistance.
Marius S. Vassiliou, The Institute for Defense Analyses
Title: Innovation Patterns in Some Successful C2 Technologies
In a world of rapidly advancing commercial technology, the U.S. military often still struggles to deliver state-of-the-art information technologies for C2 to warfighters and commanders. Some recent success stories include the Tactical Ground Reporting (TIGR) system, the Command Post of the Future (CPOF), and the Combined Information Data Network Exchange (CIDNE). These cases can be characterized using a Kline chain-linked model of innovation, with very strong iterative links between R&D and "markets" (military end users in this context). These initiatives also made effective use of available commercial technology, and displayed "edge innovation" by end users. The initiatives identified pressing needs with a minimum of process formalism, and then filled those needs quickly, with dedicated development teams for continual refinement. They often temporarily bypassed normal procurement channels. Initial deployments were often limited, with "at-risk" adoption by commanders, allowing crucial in-theater experimentation and feedback loops in the development process. As the technologies proved useful, deployment expanded. Despite potential problems in interoperability and security, and conflicts with the military bureaucracy, such "Kline-like" innovation shows promise for some C2 technologies.
Title: Evidence-based Trust Propagation Framework
Project Scope: With the advent of online media, more and more consumers, including decision makers, rely on the Internet for news and other information. However, the relative ease of publishing news online has had a significant impact on the overall quality of the news accessible to consumers. Even traditional, reputable media sources have frequently been fooled by fraudulent news published online. Finding trustworthy sources and content is both an important and a challenging task. Existing fact-finding models work on structured data consisting of pre-extracted fields and assume accurate information extraction, but as online data becomes increasingly unstructured, such accuracy is not usually available. The problem of verifying the trustworthiness of information is, for the most part, the information consumer's problem, but it is also important from the publishers' perspective, as indicated by the issues that arose during the recent WikiLeaks crisis.
We propose a novel, content-based trust propagation framework to ascertain the veracity of free-text claims and compute the trustworthiness of their sources, based on the evidence we find for the claims. The quality of relevant content is incorporated as indirect supervision for the trustworthiness of the source for the claim. The trust scores are then propagated via a graph-based iterative algorithm over the source, claim, and content nodes. Using a retrieval-based approach to find relevant articles, we instantiate a model to compute the trustworthiness of news sources and articles. We show that the proposed model helps assess the trustworthiness of sources better, and that ranking news articles based on trustworthiness learned from the proposed content-driven model is significantly better than baselines that ignore either the content quality or the trust framework. We found that for certain news genres, some news sources (including certain online blogs) tend to be more trustworthy than other, more established media organizations.
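A minimal sketch of graph-based iterative trust propagation in the spirit of fact-finding algorithms is shown below, alternating between claim-belief and source-trust updates on a toy source-claim graph; the data, the evidence scores, and the simple averaging update are illustrative and much simpler than the proposed evidence-weighted model.

```python
# Sketch: alternate between claim-belief and source-trust updates on a
# bipartite source-claim graph (in the spirit of fact-finder algorithms).
# The toy data and the averaging update are illustrative only; the proposed
# model additionally weights edges by the quality of retrieved evidence.

claims_by_source = {
    "source_a": ["claim1", "claim2"],
    "source_b": ["claim2", "claim3"],
    "source_c": ["claim3"],
}
evidence_score = {"claim1": 0.9, "claim2": 0.6, "claim3": 0.2}  # hypothetical evidence quality

sources_by_claim = {}
for s, cs in claims_by_source.items():
    for c in cs:
        sources_by_claim.setdefault(c, []).append(s)

trust = {s: 0.5 for s in claims_by_source}        # initial source trust
belief = {c: 0.5 for c in sources_by_claim}       # initial claim belief

for _ in range(20):
    # claim belief: average trust of its sources, nudged by textual evidence
    for c, srcs in sources_by_claim.items():
        avg_trust = sum(trust[s] for s in srcs) / len(srcs)
        belief[c] = 0.5 * avg_trust + 0.5 * evidence_score[c]
    # source trust: average belief of the claims it makes
    for s, cs in claims_by_source.items():
        trust[s] = sum(belief[c] for c in cs) / len(cs)

print({k: round(v, 3) for k, v in trust.items()})
print({k: round(v, 3) for k, v in belief.items()})
```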
Recent Progress: We proposed a trust propagation framework to compute the trustworthiness of news articles and their sources. The framework generalizes to other domains as well; in fact, we show that previous studies of trustworthiness on structured data can be modeled as special cases of the proposed framework. We plan to continue exploring the use of online content for predicting trustworthiness in other domains, including healthcare. Current research focuses on building a "fact finder"-like application for validating a claim by retrieving textual evidence from news articles and blog posts.
Publications:
Dan Roth, Mark Sammons, and V.G.Vinod Vydiswaran. "A Framework for Entailed Relation Recognition", ACL 2009.
Mark Sammons, V.G.Vinod Vydiswaran, Dan Roth, et al. "Relation Alignment for Textual Entailment Recognition", TAC 2009.
Jeff Pasternack and Dan Roth. "Knowing what to believe (when you already know something)", COLING 2010.
V.G.Vinod Vydiswaran, ChengXiang Zhai, and Dan Roth. "Content-based Trust Propagation Framework", under review.
V.G.Vinod Vydiswaran, ChengXiang Zhai, and Dan Roth. "Gauging the Internet Doctor: Ranking Medical Facts based on Community Knowledge", under review.
V.G.Vinod Vydiswaran, University of Illinois at Urbana-Champaign, and Dan Roth (PI), Command Control and Interoperability Center for Advanced Data Analysis (CCICADA)
Title: Shape-free detection of hazardous materials and its application to counter-terrorism
The Radon transform is a cornerstone of modern image processing. Using the Radon transform, we can detect the location and shape of hidden objects, and the algorithm is built into almost all medical imaging software. It takes advantage of the difference in contrast between objects and their surroundings. However, it does not provide any information about the physical parameters (such as density, dielectric constant, etc.) of the objects. For diagnostic purposes in medical imaging this is useful and powerful, but for counterterrorism purposes it fails in some cases, because hazardous materials can be hidden in any shape.
We revisited the Radon transform and found that it can recover the physical parameters of materials if the mathematical model is modified correspondingly. We assume that some hazardous materials can be distinguished by a few physically measurable parameters. Using a known reference material and the regular Radon transform, we are able to design a new algorithm that detects the physical parameters of hidden objects.
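For reference, the quantity being modified is the standard Radon transform, which maps a parameter field f (e.g., attenuation or density) to its line integrals:

```latex
(Rf)(\theta, s) \;=\; \int_{-\infty}^{\infty}
   f\bigl(s\cos\theta - t\sin\theta,\; s\sin\theta + t\cos\theta\bigr)\, dt,
\qquad \theta \in [0, \pi),\; s \in \mathbb{R},
```

i.e., the integral of f along the line x cos(theta) + y sin(theta) = s. Comparing projections of the unknown object against those of a known reference material is what allows absolute parameter values, rather than only contrasts, to be estimated, as described above.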
We expect to develop a new device based on the above-mentioned algorithm and use it to detect hazardous materials carried by terrorists, no matter where or in what shape the materials are hidden.
Guoping Zhang, Morgan State University