Data quality tool

With the commoditization of hardware and the associated decrease in infrastructure costs, the number of ways in which the data needed to drive traffic monitoring can be collected has increased significantly. However, most traffic measurement methods have drawbacks related to the technology they use. For example, data collection via smartphones has extremely low infrastructure costs and can provide extremely precise and granular data, but suffers from very low penetration. Inductive loop detectors, on the other hand, are point-based measuring tools which cannot always be used for travel time estimates, can carry high maintenance costs, and are not always accurate; at the same time, they have extremely high penetration rates (in locations where they are installed).

This diversity of strengths and weaknesses makes it evident that it is necessary, at least at this point in time, to combine different data sources into a single "traffic picture" whose data is of higher quality than that of any individual source. A key component of such a system is the ability to evaluate the quality of both the individual data streams and their combinations, in order to determine what improvements, if any, the combined system provides and whether certain parameters or combinations yield better results. The goal of this project is to devise such a data quality assessment system so that these evaluations and analyses can be made.

General system description

The system, as currently envisioned, will be a web-based portal which will allow users to evaluate the quality of various data feeds through any modern, standards-compliant browser with an internet connection to CCIT servers. The interface will be primarily visual, allowing users to compare a number of metrics (described in more detail below) visually as well as numerically. At the current time, a comparison of at least one and at most two data sources will be possible, though the system will be designed in such a way that the latter restriction will not be permanent and could be lifted in the future.

In the spirit of modern and user-friendly web design paradigms, the system should be responsive and visually appealing. Exporting generated graphs for sharing via email or other means should be straightforward. The tool should be useful "as is" or "out of the box" but still allow enough customization for users to tailor it to their specific needs or applications.

More details on the user interface and program layout will be specified in a separate document. The system will be targeted primarily at CCIT researchers familiar with the different data feeds and models which are available. However, the user interface will be designed with less technical users in mind, so that (for example) it can be demoed live to transportation professionals.

Note that, in general, the CCIT system contains two types of data feeds: link-based and point-based. In the short term, the system will be designed with the latter group in mind; however, it should be architected so that link-based feeds can be added later without much additional effort.

Distinction between DAT and FeedGenerator

Note that the system described here can be called the "Data Analysis Tool," or DAT for short. Its only purpose is to accept any number of feeds as inputs and provide an interface to compare the data that these feeds contain directly. Its purpose is not to take two vastly different feeds and generate the necessary modifications to make them compatible with each other for comparison. Such functionality is beyond the scope of the work for the DAT component specifically, even though it will most likely be necessary for DAT itself to work properly.

As a result, a required component of the data quality assessment framework (but not DAT itself) is the FeedGenerator module. This component is responsible for creating the necessary output filters to the feeds which already exist in the system so that different feeds can be compared correctly in the DAT. The FeedGenerator module will be responsible for making sure that consistent metadata exists across different feeds to enable comparisons to take place. The minimal metadata requirements for a feed to be usable by DAT can be broken up into feed-level and datapoint-level requirements.

Feed-level requirements
  • For processed feeds
    • Feed processing sequence – a description of the modifications that have been made to the input data to arrive at the data currently produced by the feed
    • Inputs used for feed
    • Model parameters
    • Typical input-output error profile
    • Generic feed type (model-based, statistical, historical, real-time, or some combination of these)
  • For raw feeds
    • Sensor type – if the feed contains readings from a single sensor type
Additionally, raw feeds may specify characteristics of the sensor networks, if applicable. This, however, is not required.
Datapoint-level requirements
  • Recorded time
  • Received time
  • Sensor ID
  • Sensor location
  • Sensor type [for multi-sensor feeds]
  • GPS device error [if applicable]
  • Measured value (speed/count/etc)
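
As an illustration, the datapoint-level requirements above could be represented as a record along the following lines. This is a minimal sketch in Python; the field names are illustrative rather than the actual CCIT schema.

  from dataclasses import dataclass
  from datetime import datetime
  from typing import Optional, Tuple

  @dataclass
  class DataPoint:
      # Datapoint-level metadata required for a feed to be usable by DAT.
      recorded_time: datetime                # when the sensor took the measurement
      received_time: datetime                # when the measurement reached the CCIT server
      sensor_id: str
      sensor_location: Tuple[float, float]   # e.g. (latitude, longitude)
      measured_value: float                  # speed, count, etc.
      sensor_type: Optional[str] = None      # required only for multi-sensor feeds
      gps_error: Optional[float] = None      # reported device error, if applicable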

Users who want to compare two incompatible feeds will be provided with an interface to the FeedGenerator through which they can submit a request to the system administrator to create an output filter for the feed(s) in question. A direct "instant filter creation" mechanism will not be present in the system, at least initially, due to the management complexities that such a mechanism would introduce.

Evaluation metrics

The above-described system is largely useless without a description of the metrics which it will display for analysis. This list of metrics is, of course, not all-inclusive, and the system should be designed in such a fashion that adding new metrics is trivial beyond coding the metric calculation itself. As noted earlier, point-based data evaluations are the first priority, so the metrics below apply only to such sources.

The data quality assessment tool should provide an easy-to-use interface to specify "correct" or benchmark values for each of the metrics, as the tolerable amount of, for example, GPS error depends on the specific application for which the data is being considered. Also, any feed available to the system should be usable as a benchmark for any metric (as long as it has the data to calculate it).
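
As a rough illustration of how benchmark values might be specified, the sketch below attaches either a fixed threshold or a benchmark feed to each metric. The metric names, feed name, and field names are hypothetical.

  # Each metric can be checked against a fixed threshold or against another
  # feed that serves as its benchmark (names here are hypothetical).
  benchmarks = {
      "gps_error_distribution": {"type": "threshold", "max_mean_error_m": 10.0},
      "transmission_delay": {"type": "feed", "benchmark_feed": "loop_detectors"},
  }

  def within_threshold(metric_name, computed_value, benchmarks):
      """Check a threshold-type benchmark; feed-type benchmarks are compared visually."""
      spec = benchmarks.get(metric_name)
      if spec is None or spec["type"] != "threshold":
          return True  # no threshold configured for this metric
      return computed_value <= spec["max_mean_error_m"]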

All metrics, whether direct or calculated, will generally be generated on the fly for each request, so the system does not require a database for storing calculated data. If usage grows to the point where recomputing results becomes a performance problem, we should look into caching the calculation results in memcached; since the results are transient, storing them permanently in a database does not make much sense.
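
A minimal caching sketch follows, assuming a memcached client such as pymemcache; the key scheme, the compute_fn callback, and the one-hour expiry are illustrative assumptions rather than part of the design.

  import hashlib
  import json

  from pymemcache.client.base import Client

  cache = Client(("localhost", 11211))

  def cached_metric(feed_id, metric_name, time_range, compute_fn, ttl_seconds=3600):
      """Return a metric result from memcached, computing and caching it on a miss."""
      raw = f"{feed_id}|{metric_name}|{time_range}"
      key = "dat:" + hashlib.sha1(raw.encode("utf-8")).hexdigest()
      cached = cache.get(key)
      if cached is not None:
          return json.loads(cached)
      result = compute_fn(feed_id, metric_name, time_range)
      cache.set(key, json.dumps(result), expire=ttl_seconds)
      return result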

Initially, the system will not allow users to flag specific problematic data points or feeds, due to the added overhead of storing this data in a manageable and useful fashion. If users want to, in effect, create a new output filter based on their analyses, the tool will provide a textual summary of the user's current filtering algorithm so that a corresponding output filter may be created by the system administrator.
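
The textual summary could be as simple as a rendering of the user's active filter settings, along the lines of the sketch below; the filter names shown are hypothetical.

  def summarize_filters(filters):
      """Render the user's active filters as a human-readable request summary."""
      lines = ["Requested output filter, based on the following DAT filter settings:"]
      for name, value in sorted(filters.items()):
          lines.append(f"  - {name}: {value}")
      return "\n".join(lines)

  print(summarize_filters({
      "feed": "telenav_probe",
      "time_interval": "2010-11-01 to 2010-11-07",
      "location": "polygon: downtown SF",
      "max_gps_error_m": 20,
  }))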

Data-level metrics

  • Distribution of GPS errors (as reported by the recording device)
  • Distribution of map-matching errors (as determined by the map-matching algorithms)
  • Data transmission delay (time difference between data recording and data storage on server; 2-step delay for TeleNav only: device→TeleNav server→CCIT server)
  • Sampling rate †
  • Space coverage †
  • Time coverage †
  • Penetration rate †
  • Distribution of measured values

–––––
† – at this time, it is not entirely clear if this data will be generated on-the-fly by the DAT or by the FeedGenerator (see above for the distinction). This separation should become evident during implementation.

All of these metrics should be filterable by time interval, feed, device model (if available), location (specified as a network, polygon, or set of specific links), and unique device. This would allow for the analysis of derived metrics such as "density of data per unique device" or "distribution of point-location distance from link end for the city of SF."
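
As an example of an on-the-fly, filterable data-level metric, the sketch below computes the transmission delay distribution for datapoints matching a feed and time-interval filter. It assumes the illustrative datapoint record sketched earlier, plus a hypothetical feed_id attribute.

  def transmission_delays(points, feed_id=None, start=None, end=None):
      """Return transmission delays in seconds for datapoints matching the filters."""
      delays = []
      for p in points:
          if feed_id is not None and getattr(p, "feed_id", None) != feed_id:
              continue
          if start is not None and p.recorded_time < start:
              continue
          if end is not None and p.recorded_time > end:
              continue
          delays.append((p.received_time - p.recorded_time).total_seconds())
      return delays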

Application-level comparison

This functionality will allow the users to directly compare the output of some model when different combinations of input feeds are used with it. At this time, the comparison between outputs will be purely visual, though specific metrics or analytical/numerical comparison methodologies can be added at a later time.

The system will be designed as follows: for a chosen model (e.g. the highway model), the user will be able to specify which input feeds should be used and for what time period the model will be run. The user will also provide a contact email address. When the user submits the request, the task will be placed into a queue of model runs. When the model finishes computation, the user will be sent an email containing a link at which they may view the model's results. These results will be viewable by other users as well. As a result, a user who wants to compare the output of the highway model on two different sets of input feeds may do any of the following:

  • Find two existing output sets and compare their outputs
  • Take an existing output set, request the generation of a second output set, and then compare the two sets once generation of the second set is complete
  • Make two requests for the generation of two new output sets and then compare them once the computation of both is complete

Since this will allow for the easy generation of an arbitrary number of output sets, these generated output sets will be automatically deleted after a set number of days, initially 7.
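
The sketch below illustrates one possible shape for the model-run request queue and the seven-day expiry of generated output sets; the names and fields are hypothetical, and the real system would also handle the email notification and persistence of results.

  import queue
  from dataclasses import dataclass, field
  from datetime import datetime, timedelta

  OUTPUT_TTL_DAYS = 7   # generated output sets are deleted after this many days

  @dataclass
  class ModelRun:
      model: str                 # e.g. "highway"
      input_feeds: list          # feed identifiers chosen by the user
      time_period: tuple         # (start, end) of the period the model is run over
      contact_email: str         # where the "results ready" link is sent
      submitted_at: datetime = field(default_factory=datetime.utcnow)

  run_queue: "queue.Queue[ModelRun]" = queue.Queue()

  def submit_run(run: ModelRun) -> None:
      """Place a model-run request at the back of the queue."""
      run_queue.put(run)

  def output_expired(created_at: datetime, now: datetime) -> bool:
      """True once a generated output set is older than the retention window."""
      return now - created_at > timedelta(days=OUTPUT_TTL_DAYS)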

Future directions

In the future, the system should be able to accommodate the following additional feed types.

Travel time distributions

For comparing travel time feeds, the system should provide metrics which are applicable to distributions of values instead of individual values. For example,

  • The Wasserstein metric, for comparing an estimated travel time distribution against collected samples (see the sketch below)
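
For instance, scipy provides a one-dimensional Wasserstein distance (scipy.stats.wasserstein_distance) that could be used to compare an estimated travel time distribution against collected samples; the values below are made up for illustration.

  from scipy.stats import wasserstein_distance

  estimated_travel_times = [112.0, 118.5, 120.0, 125.3, 131.0]   # model output, seconds
  observed_travel_times = [109.0, 116.2, 123.8, 128.0, 140.5]    # probe samples, seconds

  distance = wasserstein_distance(estimated_travel_times, observed_travel_times)
  print(f"Wasserstein distance between the two distributions: {distance:.1f} s")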

Since the Caltrans study will mostly involve point-speed raw data and a point-speed model (namely, the highway model), introducing a travel-time output filter for data quality assessment would be an unnecessary additional task, requiring time to be spent on something that is not part of the deliverables for the initial use case of this system.

Link-based feeds

  • Density of data (total number of points per link)
  • Frequency of new data receipt (total per link)
  • Distribution of point-location distance from link end (for city locations with traffic lights, this provides the ability to flag cases where most data points are not near the link ends, since vehicles should be waiting at the lights)