Modelling Crash Frequency and Severity Using Global Positioning System Travel Data


Book Description

"Improving road safety requires accurate network screening methods to identify and prioritize sites to maximize effectiveness of implemented countermeasures. In screening, hotspots are commonly identified using statistical models based on historical crash data. However, collision databases are subject to errors and omissions and crash-based methods are reactive. With the arrival of Global Positioning System (GPS) trajectory data, surrogate safety methods, proactive by nature, have gained popularity. Although GPS-enabled smartphones can collect reliable and spatio-temporally rich driving data from regular drivers using an inexpensive, simple, and user-friendly tool, few studies to date have analyzed large volumes of smartphone GPS data and considered surrogate-safety modelling techniques for network screening. The main objective of this thesis is to propose and validate a GPS-based network screening modeling framework dependent on surrogate safety measures (SSMs). First, methods for collecting and processing GPS and associated data sources are presented. Data, collected in Quebec City and capturing 4000 drivers and 21,000 trips, was processed using map matching and speed filtering algorithms. Spatio-temporal congestion measures were proposed and extracted and techniques for visualizing congestion patterns at aggregate and disaggregate levels were explored. Results showed that each peak period has an onset period and dissipation period lasting one hour. Congestion in the evening is greater and more dispersed than in the morning. Congestion on motorways, arterials, and collectors is most variable during peak periods. Second, various event-based and traffic flow SSMs are proposed and correlated with historical collision frequency and severity using Spearman's correlation coefficient and pairwise Kolmogorov-Smirnov tests, respectively. For example, hard braking (HBEs) and accelerating events (HAEs) were positively correlated with crash frequency, though correlations were much stronger at intersections than at links. Higher numbers of these vehicle manoeuvres were also related to increased collision severity. Considered traffic flow SSMs included congestion index (CI), average speed (V̄), and coefficient of variation of speed (CVS). CI was positively correlated with crash frequency and showed a non-monotonous relationship with severity. V̄ was negatively correlated with crash frequency and had no conclusive statistical relationship with crash severity. CVS was positively related to increased crash frequency and severity. Third, a mixed-multivariate model was developed to predict crash frequency and severity incorporating GPS-derived SSMs as predictive variables. The outcome is estimated using two models; a crash frequency model using a Full Bayes approach and estimated using the Integrated Nested Laplace Approximation (INLA) approach and a crash severity model integrated through a fractional Multinomial Logit model. The results are combined to generate posterior expected crash frequency at each severity level and rank sites based on crash cost. Negative Binomial models outperformed alternative models based on a sample of the network, and including spatial effects showed improvement in model fit. This crash frequency model was shown to be accurate at the network scale, with the majority of proposed SSMs statistically significant at 95 % confidence. In the crash severity model, fewer variables were significant, yet the effect of all significant variables was consistent with previous results. 
Correlations between rankings predicted by the model and by the crash data were adequate for intersections (0.46) but were poorer for links (0.25). The inclusion of severity, which is an independent dimension of safety, is a substantial improvement over many existing studies, and the ability to prioritize sites based on GPS data and SSMs rather than historical crash data represents a substantial contribution to the field of road safety." --
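
As a minimal sketch of the correlation stage described above, the snippet below pairs GPS-derived SSMs with historical crash data at the site level, using Spearman's rank correlation for crash frequency and a two-sample Kolmogorov-Smirnov test for severity. The input file and column names (hbe_count, hae_count, congestion_index, crash_count, severe_crashes) are illustrative assumptions, not the thesis's actual data schema.

```python
# Correlating GPS-derived surrogate safety measures (SSMs) with historical
# crash data at the site level. Column names and the input file are
# illustrative assumptions.
import pandas as pd
from scipy import stats

# One row per site (intersection or link): SSMs plus historical crash data.
sites = pd.read_csv("sites.csv")

# Spearman rank correlation between each SSM and crash frequency.
for ssm in ["hbe_count", "hae_count", "congestion_index"]:
    rho, p = stats.spearmanr(sites[ssm], sites["crash_count"])
    print(f"{ssm}: Spearman rho = {rho:.2f} (p = {p:.3f})")

# Two-sample Kolmogorov-Smirnov test: does the SSM distribution differ
# between sites with and without severe (injury/fatal) crashes?
with_severe = sites.loc[sites["severe_crashes"] > 0, "hbe_count"]
without_severe = sites.loc[sites["severe_crashes"] == 0, "hbe_count"]
ks_stat, ks_p = stats.ks_2samp(with_severe, without_severe)
print(f"HBE distributions by severity group: D = {ks_stat:.2f}, p = {ks_p:.3f}")
```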




Crash Severity Modeling in Transportation Systems


Book Description

Modeling crash severity is an important component of reasoning about the issues that may affect highway safety. A better understanding of the factors underlying crash severity can be used to reduce injury severity, locate hazardous road sites, and adopt suitable countermeasures. To provide insight into the mechanisms underlying crash injury severity, a variety of statistical approaches have been used to model the relationship between crash severity and potential risk factors. Many of the traditional approaches for analyzing crash severity are limited in that they assume all observations are independent of each other. However, given the reality of vehicle movement in networked systems, the assumption of independence of crash incidence is not likely valid. For instance, spatial and temporal autocorrelations are important sources of dependency among observations that may bias estimates if not considered in the modeling process. Moreover, there are other aspects of vehicular travel that may influence crash severity and have not been explored in traditional analysis approaches. One such aspect is the roadway visibility available to a driver at a given time, which can affect their ability to react to changing traffic conditions, a characteristic known as sight distance. Accounting for characteristics such as sight distance in crash severity modeling involves moving beyond statistical analysis to modeling the complex geospatial relationships between the driver and the surrounding landscape.

To address these limitations of traditional approaches to crash severity modeling, this dissertation first details a framework for detecting temporal and spatial autocorrelation in crash data. An approach for evaluating the sight distance available to drivers along roadways is then proposed. Finally, a crash severity model is developed based on a multinomial logistic regression approach that incorporates the available sight distance and spatial autocorrelation as potential risk factors, in addition to a wide range of other factors related to road geometry, traffic volume, driver behavior, environment, and vehicles. To demonstrate the characteristics of the proposed model, an analysis of vehicular crashes (years 2013-2015) along the I-70 corridor in the state of Missouri (MO) and on roadways in Boone County, MO, is conducted. To assess existing stopping sight distance and decision sight distance on multilane highways, a geographic information system (GIS)-based viewshed analysis is developed to identify locations that do not conform to AASHTO (2011) criteria for stopping and decision sight distances, which could then be used as potential risk factors in crash prediction. This method also provides a new technique for estimating passing sight distance along two-lane highways and locating passing and no-passing zones.

To detect whether significant temporal autocorrelation exists in crash data, this dissertation employs the Durbin-Watson (DW) test, the Breusch-Godfrey (LM) test, and the Ljung-Box Q (LBQ) test, and then describes the removal of any significant temporal autocorrelation from crash data using the differencing procedure and the Cochrane-Orcutt method. To assess whether vehicle crashes are spatially clustered, dispersed, or random, the Moran's I and Getis-Ord Gi* statistics are used as measures of spatial autocorrelation among vehicle incidents. To incorporate spatial autocorrelation in crash severity modeling, the use of the Gi* statistic as a potential risk factor is also explored. The results provide firm evidence of the importance of accounting for spatial and temporal autocorrelation, as well as sight distance, in modeling traffic crash data.
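
The autocorrelation diagnostics named above can be illustrated with a short sketch using statsmodels (Durbin-Watson, Breusch-Godfrey, Ljung-Box) and the PySAL/esda stack (Moran's I). The file names, crash series, and regression specification are illustrative assumptions rather than the dissertation's actual data.

```python
# Temporal- and spatial-autocorrelation diagnostics for crash data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson
from statsmodels.stats.diagnostic import acorr_breusch_godfrey, acorr_ljungbox

# Hypothetical monthly crash counts along the corridor, 2013-2015.
crashes = pd.read_csv("monthly_crashes.csv")   # columns: month, crash_count, aadt
ols = sm.OLS(crashes["crash_count"], sm.add_constant(crashes["aadt"])).fit()

print("Durbin-Watson:", durbin_watson(ols.resid))               # ~2 suggests no AR(1) autocorrelation
print("Breusch-Godfrey:", acorr_breusch_godfrey(ols, nlags=4))  # (LM stat, p, F stat, p)
print(acorr_ljungbox(ols.resid, lags=[12]))                     # Ljung-Box Q up to lag 12

# Spatial autocorrelation: global Moran's I on site-level crash counts.
import geopandas as gpd
from libpysal.weights import KNN
from esda.moran import Moran

sites = gpd.read_file("crash_sites.shp")       # hypothetical point layer with a crash_count attribute
w = KNN.from_dataframe(sites, k=8)             # k-nearest-neighbour spatial weights
mi = Moran(sites["crash_count"].values, w)
print(f"Moran's I = {mi.I:.3f}, pseudo p-value = {mi.p_sim:.3f}")
```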




Analyzing Crash Frequency and Severity Data Using Novel Techniques


Book Description

Providing safe travel from one point to another is the main objective of any public transportation agency. The recent publication of the Highway Safety Manual (HSM) has resulted in an increasing emphasis on the safety performance of specific roadway facilities. The HSM provides tools such as crash prediction models that can be used to make informed decisions, and the manual is a good starting point for transportation agencies interested in improving roadway safety in their states. However, the models published in the manual need calibration to account for local driver behavior and jurisdictional differences. The method provided in the HSM for calibrating crash prediction models is not scientific and has been shown to be inefficient by several studies. To overcome this limitation, this study proposes two alternatives. First, a new method is proposed for calibrating the crash prediction models using negative binomial regression. Second, this study investigates new forms of state-specific Safety Performance Functions (SPFs) using negative binomial techniques. The HSM's first edition provides a multiplier applied to the univariate crash prediction models to estimate the expected number of crashes at different crash severities; it does not consider the distinct effect unobserved heterogeneity might have on crash severities. To address this limitation, this study develops a multivariate extension of the Conway-Maxwell-Poisson distribution for predicting crashes and presents the statistical properties and parameter estimation algorithm for the distribution. The last part of this dissertation extends the use of the Highway Safety Manual by developing a multivariate crash prediction model for the bridge sections of roads. The study then compares the performance of the newly proposed multivariate Conway-Maxwell-Poisson (MVCMP) model with the multivariate Poisson-Lognormal, univariate Conway-Maxwell-Poisson (UCMP), and univariate Poisson-Lognormal models for different crash severities. This example will help transportation researchers apply the model correctly.
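
A minimal sketch of the contrast between the HSM's single calibration factor and a negative-binomial-regression alternative, in the spirit of the first contribution described above, is shown below. The segment file, covariates, and column names are illustrative assumptions.

```python
# HSM-style calibration factor vs. a negative binomial regression alternative.
import numpy as np
import pandas as pd
import statsmodels.api as sm

seg = pd.read_csv("segments.csv")   # hypothetical columns: observed, hsm_predicted,
                                    #                       shoulder_width_ft, truck_pct

# HSM approach: a single ratio of total observed to total predicted crashes.
c_hsm = seg["observed"].sum() / seg["hsm_predicted"].sum()
print(f"HSM calibration factor C = {c_hsm:.2f}")

# NB regression alternative: treat the HSM prediction as an offset, so the
# exponentiated intercept acts as a calibration factor (at the reference level
# of the covariates) while local covariates absorb systematic effects.
X = sm.add_constant(seg[["shoulder_width_ft", "truck_pct"]])
nb = sm.NegativeBinomial(seg["observed"], X, offset=np.log(seg["hsm_predicted"])).fit(disp=False)
print(nb.summary())
print(f"NB-based calibration (exp of intercept): {np.exp(nb.params['const']):.2f}")
```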




Feasibility of Applying the Global Positioning System to Locate Motor Vehicle Crashes


Book Description

Countermeasures for motor vehicle crashes are often determined after extensive analysis of the crash history of a roadway segment. An important factor that drives the value of this analysis is the accuracy, or precision, with which each crash is located, yet this location is only as accurate as the police officer's estimate. In light of this, many have suggested that global positioning system (GPS) technology has the potential to increase data accuracy and decrease the time spent recording crash location data. Over 10 months, the locations of 34 crashes were determined using both the conventional method and a hand-held GPS receiver, and the two methods were compared in terms of timeliness and precision. The benefits of any improved precision from the GPS were assessed by asking crash data analysts at the local level how the improved precision affected their consideration of potential crash countermeasures for five crashes selected from the sample. At the scene of the crash, the use of GPS receivers added, on average, up to 10 extra minutes per crash, depending on how crash location was defined. There was an average disparity of 130 ft (39 m) between the locations determined with the GPS and conventional methods, presuming the GPS precision given in the literature of within 7 ft (2 m). However, although both the literature and survey responses indicated that greater precision will affect the evaluation of crash countermeasures in some instances, many of the errors cited in conventional crash location methods arise from human error rather than from a lack of precision. The authors provide recommendations for defining crash location uniformly, discuss limitations of the methodology employed in this effort, and identify the types of countermeasures that may or may not benefit from improved precision.
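
For illustration, the disparity between a GPS fix and a conventionally referenced crash location can be computed as a great-circle distance; the short sketch below uses the haversine formula with hypothetical coordinates and does not reproduce the study's own procedure.

```python
# Great-circle disparity between a GPS-reported and a conventionally
# estimated crash location. Coordinates are hypothetical.
from math import radians, sin, cos, asin, sqrt

def haversine_ft(lat1, lon1, lat2, lon2):
    """Great-circle distance between two WGS84 points, in feet."""
    earth_radius_ft = 20_902_231  # mean Earth radius (~6,371 km) expressed in feet
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * earth_radius_ft * asin(sqrt(a))

# Example: GPS fix vs. the officer's estimate referenced to the nearest milepost.
gps = (37.5407, -77.4360)
conventional = (37.5404, -77.4364)
d = haversine_ft(*gps, *conventional)
print(f"Location disparity: {d:.0f} ft ({d * 0.3048:.0f} m)")  # 1 ft = 0.3048 m
```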




Relationship Between Speed Metrics and Crash Frequency and Severity


Book Description

Reducing the number and severity of crashes on highways and streets is of high importance to government officials and transportation professionals in the United States. Substantial research has focused on various speed metrics, such as operating speeds and the posted speed limit, and their relationship to safety measures such as crash frequency and crash severity. Crash severity is the safety measure most often linked to measures of speed and is based on the dissipation of kinetic energy. However, many aspects of the relationships between speed metrics and crash frequency and risk have yet to be studied in depth, so a complete understanding of speeding-related crashes is still lacking. Design speeds are used to establish geometric design criteria, and operating speed results from the geometric design process. Posted speed limits may be established based on operating speeds or by statute. When posted speed limits are inconsistent with design or operating speeds, road safety performance may be affected. A more complete understanding of the relationship between safety performance and operating speeds, posted speed limits, and design speeds may produce rational speed limits and lead to improved safety performance on roadways. This research combined real-time vehicle probe speed data, roadway inventory data, and crash data to assess crash risk and crash frequency.

This thesis first determined the risk of a crash on two-lane rural highways based on operating speed metrics, differences between speed metrics, and traffic volume data. Results from the crash risk analysis indicate that operating speeds in 1-minute and 5-minute averages improve the statistical fit and prediction of binary logistic regression models. Higher traffic volumes and operating speeds above either the road average speed or the road reference speed were associated with increased crash risk, whereas variations in travel speeds between vehicles were associated with decreased crash risk. This thesis also analyzed the frequency of crashes on horizontal curve segments of two-lane rural roadways using operating speed data, differences among speed metrics, traffic volume data, roadway inventory data, and crash data. Negative binomial regression models improve the statistical fit and prediction of crash frequency models compared to random-effects negative binomial regression. Generally, increases in the differences between operating speed and road average speed, and between operating speed and inferred design speed, were associated with an increase in crash frequency. Increases in the differences between inferred design speed and posted speed limit were also associated with an expected increase in crash frequency; however, increases in operating speed variance and in the difference between operating speeds and the posted speed limit were associated with an expected decrease in crash frequency.
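
A minimal sketch of the binary logistic crash-risk specification described above is given below, pairing 5-minute probe-speed aggregates and volume with a crash/no-crash indicator. The matched dataset and variable names are illustrative assumptions.

```python
# Binary logistic regression of crash occurrence on probe-speed metrics.
import numpy as np
import pandas as pd
import statsmodels.api as sm

obs = pd.read_csv("speed_crash_observations.csv")
# hypothetical columns:
#   crash        1 if a crash occurred in the segment-interval, else 0
#   speed_5min   mean probe speed over the 5-minute interval (mph)
#   ref_speed    road reference (free-flow) speed (mph)
#   speed_sd     standard deviation of probe speeds in the interval (mph)
#   volume       traffic volume for the interval (veh)

obs["speed_minus_ref"] = obs["speed_5min"] - obs["ref_speed"]
X = sm.add_constant(obs[["speed_minus_ref", "speed_sd", "volume"]])
logit = sm.Logit(obs["crash"], X).fit(disp=False)
print(logit.summary())

# Odds ratios: values above 1 indicate higher crash odds per unit increase.
print(np.exp(logit.params))
```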




A Novel Approach to Modeling and Predicting Crash Frequency at Rural Intersections by Crash Type and Injury Severity Level


Book Description

Safety at intersections is of significant interest to transportation professionals due to the large number of possible conflicts that occur at those locations. In particular, rural intersections have been recognized as among the most hazardous locations on roads. However, most models of crash frequency at rural intersections, and at road segments in general, do not differentiate between crash type (such as angle, rear-end, or sideswipe) and injury severity (such as fatal injury, non-fatal injury, possible injury, or property damage only). Thus, there is a need to identify the differential impacts of intersection-specific and other variables on crash types and severity levels. This thesis builds upon the work of Bhat et al. (2013b) to formulate and apply a novel approach for the joint modeling of crash frequency and combinations of crash type and injury severity. The proposed framework explicitly links a count data model (to model crash frequency) with a discrete choice model (to model combinations of crash type and injury severity); it uses a multinomial probit kernel for the discrete choice model, introduces unobserved heterogeneity in both the crash frequency model and the discrete choice model, and accommodates an excess of zeros. The results show that the type of traffic control and the number of entering roads are the most important determinants of crash counts and crash type/injury severity, and the results of the analysis underscore the value of the proposed model both for data fit and for accurately estimating variable effects.
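
The joint framework above is estimated with a multinomial probit kernel and correlated unobserved heterogeneity, which is beyond a short snippet; the sketch below is only a simplified two-stage stand-in (a negative binomial count model plus a multinomial logit for crash type), with hypothetical data, to illustrate how predicted frequencies and type shares can be combined.

```python
# Simplified two-stage stand-in for the joint count / discrete-choice model.
import numpy as np
import pandas as pd
import statsmodels.api as sm

intersections = pd.read_csv("intersections.csv")  # site-level: crash_count, legs, control_signal, aadt_major
crashes = pd.read_csv("crashes.csv")               # crash-level: crash_type (0=angle, 1=rear_end, 2=sideswipe),
                                                   #              legs, control_signal

# Stage 1: crash frequency per intersection (negative binomial).
Xf = sm.add_constant(intersections[["legs", "control_signal", "aadt_major"]])
freq = sm.NegativeBinomial(intersections["crash_count"], Xf).fit(disp=False)

# Stage 2: probability of each crash type, given that a crash occurred.
Xt = sm.add_constant(crashes[["legs", "control_signal"]])
types = sm.MNLogit(crashes["crash_type"], Xt).fit(disp=False)

# Expected crashes of each type = predicted frequency x predicted type shares.
Xt_site = sm.add_constant(intersections[["legs", "control_signal"]])
expected_by_type = np.asarray(freq.predict(Xf))[:, None] * np.asarray(types.predict(Xt_site))
print(expected_by_type[:5])
```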




Proceedings of the Second International Conference on Intelligent Transportation


Book Description

These proceedings present the latest information on intelligent transportation technologies and their applications in real-world cases. The Second International Conference on Intelligent Transportation was held in Chengdu, China, on November 25–27, 2015, to present the latest research in the field, including intelligent transportation management, intelligent vehicles, rail transportation systems, traffic transportation networks, as well as road traffic element simulations and their industrial development. The aim of the conference was to bring together academics, researchers, engineers, and students from across the world to discuss state-of-the-art technologies related to intelligent transportation.




Advanced Statistical Modeling of the Frequency and Severity of Traffic Crashes on Rural Highways


Book Description

The primary objective of practitioners working in traffic safety is to reduce the number and severity of crashes. The Highway Safety Manual (HSM) provides practitioners with analytical tools and techniques to estimate expected crash frequency and severity with the aim of identifying and evaluating safety countermeasures. Expected crash frequency can be estimated using the Safety Performance Functions (SPFs) provided in Part C of the HSM. The HSM provides simple SPFs developed using the most frequently used crash count model, the negative binomial regression model. The rural nature of Wyoming highways, coupled with mountainous terrain (i.e., challenging roadway geometry), makes the HSM's basic SPFs unsuitable for determining crash contributing factors under Wyoming conditions. The objective of this study is therefore to implement advanced statistical methods, such as different functional forms of the negative binomial model and Bayesian approaches, to develop crash prediction models, investigate crash contributing factors, and determine the impact of safety countermeasures. Bayesian statistics, combined with the power of Markov Chain Monte Carlo (MCMC) sampling techniques, provides a framework for modeling small-sample datasets and complex models at the same time, where traditional Maximum Likelihood Estimation (MLE)-based methods tend to fail. As such, a No-U-Turn Sampler for Hamiltonian Monte Carlo (NUTS HMC) sampling technique in a Bayesian framework was used to investigate the frequency and injury severity of crashes on interstate freeways and selected rural highways in Wyoming.

The Poisson and negative binomial (NB) models are the most commonly used regression models in traffic safety analysis. The advantage of the NB model can be further enhanced by allowing different functional forms of the variance and dispersion structure. The NB-2 is the most common form of the NB model, typically used in developing safety performance functions (SPFs), largely due to its quadratic mean-variance relationship. However, studies in the literature have shown that the mean-variance relationship need not be restricted to this form. Another formulation of the NB model is NB-1, which assumes a constant ratio linking the mean and the variance of the crash frequencies. A more general type of NB model is the NB-P model, which does not constrain the mean-variance relationship. By leveraging this unrestrained mean-variance relationship, more accurate safety models can be developed, leading to more accurate estimation of crash risk and of the benefits of potential solutions. This study will help practitioners implement advanced methodologies to solve traffic safety problems on rural highways that have long challenged researchers. The methodologies proposed in this study will help practitioners replace outdated and inefficient traditional models and obtain more accurate traffic safety models to predict crashes and the resulting crash injury severity. Moreover, this research quantified the safety effectiveness of some unique countermeasures on rural highways.
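
The NB-1/NB-2/NB-P distinction above can be illustrated with statsmodels' NegativeBinomialP, whose variance function is mu + alpha * mu^p (p = 1 gives NB-1, p = 2 gives NB-2). The sketch below simply profiles p rather than estimating it jointly, and the Wyoming-style dataset and covariates are illustrative assumptions; the study's Bayesian NUTS/HMC estimation is not reproduced here.

```python
# Comparing NB-1, NB-2, and intermediate NB-P fits by AIC.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.discrete.discrete_model import NegativeBinomialP

seg = pd.read_csv("wy_segments.csv")   # hypothetical columns: crashes, aadt, length_mi, curve_density, grade
y = seg["crashes"]
X = sm.add_constant(pd.DataFrame({
    "log_aadt": np.log(seg["aadt"]),
    "curve_density": seg["curve_density"],
    "grade": seg["grade"],
}))
exposure = seg["length_mi"]            # segment length enters as exposure

# Profile the variance power p: p = 1 is NB-1, p = 2 is NB-2; other values
# approximate an NB-P-style unrestrained mean-variance relationship.
results = {}
for p in [1.0, 1.25, 1.5, 1.75, 2.0, 2.5, 3.0]:
    res = NegativeBinomialP(y, X, p=p, exposure=exposure).fit(disp=False)
    results[p] = res.aic

for p, aic in sorted(results.items()):
    print(f"p = {p:<4}: AIC = {aic:.1f}")
```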




Crash Data Collection and Analysis System


Book Description

Seeking to identify how it could accomplish the greatest service improvements with the most efficient use of funds, the Arizona Department of Transportation (ADOT) engaged ARCADIS to perform a Crash Data Collection and Analysis study and examine the possibilities offered by technological innovations such as Electronic Data Entry (EDE), Relational Database Management Systems (RDBMS), and Geographic Information Systems (GIS). The study resulted in a comprehensive report with three components: an examination of best practices in use in the United States today, a use-case and gap analysis examining ADOT's current data work, and a technical memorandum outlining how changes could be implemented.




Highway and Traffic Safety


Book Description

Transportation Research Record contains the following papers:

Method for identifying factors contributing to driver-injury severity in traffic crashes (Chen, WH and Jovanis, PP)
Crash- and injury-outcome multipliers (Kim, K)
Guidelines for identification of hazardous highway curves (Persaud, B, Retting, RA and Lyon, C)
Tools to identify safety issues for a corridor safety-improvement program (Breyer, JP)
Prediction of risk of wet-pavement accidents: fuzzy logic model (Xiao, J, Kulakowski, BT and El-Gindy, M)
Analysis of accident-reduction factors on California state highways (Hanley, KE, Gibby, AR and Ferrara, T)
Injury effects of rollovers and events sequence in single-vehicle crashes (Krull, KA, Khattak, AJ and Council, FM)
Analytical modeling of driver-guidance schemes with flow variability considerations (Kaysi, I and Ail, NH)
Evaluating the effectiveness of Norway's Speak Out! road safety campaign: the logic of causal inference in road safety evaluation studies (Elvik, R)
Effect of speed, flow, and geometric characteristics on crash frequency for two-lane highways (Garber, NJ and Ehrhart, AA)
Development of a relational accident database management system for Mexican federal roads (Mendoza, A, Uribe, A, Gil, GZ and Mayoral, E)
Estimating traffic accident rates while accounting for traffic-volume estimation error: a Gibbs sampling approach (Davis, GA)
Accident prediction models with and without trend: application of the generalized estimating equations procedure (Lord, D and Persaud, BN)
Examination of methods that adjust observed traffic volumes on a network (Kikuchi, S, Miljkovic, D and van Zuylen, HJ)
Day-to-day travel-time trends and travel-time prediction from loop-detector data (Kwon, JK, Coifman, B and Bickel, P)
Heuristic vehicle classification using inductive signatures on freeways (Sun, C and Ritchie, SG)