SWAROOP DINAKAR | HUMAN FACTORS

Swaroop is a Managing Scientist at the famous Driver Research Institute, where he wears many hats including research, education, software development, and analysis of motor vehicle collisions.

He holds a bachelor’s in industrial engineering from Dayananda Sagar College of Engineering in India and a master’s degree in industrial engineering from Texas A&M University, where he focused on human factors.

Swaroop has published over 20 papers in the field of human factors and is a member of several driver safety related committees and technical groups.

We covered a wide variety of human factors topics, including:

  • The science of evaluating a driver’s response

  • Pedestrian detection and safety

  • Fatality trends and insights

  • The SHRP2 data gold mine (Strategic Highway Research Program)

  • Naturalistic braking data

  • Documenting nighttime scenes

  • And more…

Two clarifications from the show:

  • Biomotion is referred to as “biometric” on a few occasions.

  • Intoxication stats for fatal pedestrian crashes: drivers >0.08 BAC is ~18% and pedestrians >0.08 BAC is ~30%. Overall, ~43% where either one was >0.08.

You can also find an audio only version on your favorite podcast platform.

A rough transcript can be found below.


Links from the Show:


Timeline of Topics:

00:00:00 - Team Growth and Research Methodology Challenges

00:02:44 - Unbiased Human Factors Analysis and Study Compilation

00:12:05 - Perception-Response Time Variables, Adjustments, and IDRR Development

00:19:21 - Journey into Human Factors Engineering

00:22:16 - Rising Fatality Trends and Emergency Vehicle Distractions

00:30:40 - Haptic Feedback, ADAS Warnings, and System Limitations

00:35:45 - Roles at Driver Research Institute and Response Software Overview

00:47:06 - Detailed Fatality Trends and ADAS Effectiveness

00:49:09 - Pedestrian Safety: Nighttime Visibility and Contributing Factors

00:57:03 - Overestimation of Visibility and Retroreflective Clothing Studies

01:09:36 - Tennis Ball Visibility Study and Bicyclist/Pedestrian Enhancements

01:26:56 - Pedestrian Walking Speeds, Adjustments, and Response Integration

01:32:26 - Case Handling: Site Inspections, Tools, and Analysis Process

01:43:44 - SHRP2 Naturalistic Data Introduction and Braking Insights

02:18:09 - Key Takeaways from SHRP2 Research on Crash Types and Responses

02:34:40 - ADAS Challenges: Distractions, Takeover, and Human-Machine Interaction

02:44:49 - Looming Effects, Lead Vehicle Detection, and Specific Influences

03:04:35 - Expert Testimony Advice and Demonstratives

03:08:12 - Turning Crash Analysis into Broader Safety Improvements


Rough Transcript:
Please find a rough transcript of the show below. This transcript has not been thoroughly reviewed or edited, so some errors may be present.

Lou (00:00:00):

Hey everyone. Welcome to another episode of the Data Driven Podcast. My guest today is Human Factors expert Swaroop Dinakar. Swaroop is a managing scientist at the famous Driver Research Institute where he wears many hats including research, education, software development, and analysis of motor vehicle collisions. He holds a bachelor's in industrial engineering from Dayananda Sagar College of Engineering in India, and a master's degree in industrial engineering from Texas A&M University where he focused on human factors. Swaroop has published over 20 papers in the field of human factors and is a member of several safety committees and technical groups. We covered a wide variety of human factors topics in this conversation, including the science of evaluating a driver's response, pedestrian detection and safety, fatality trends and related insights. The SHRP2 data gold mine, that's the Strategic Highway Research Program, naturalistic braking data, documenting nighttime scenes and more. So without further ado, enjoy the show. This wasn't even on my notes, so we'll see how close I stick to my actual notes. But you guys, your team is growing. You're having a good time.

Swaroop (00:01:23):

Yeah.

Lou (00:01:25):

You've got Jeff, Lynn, Sonny, Tim, right? Tim?

Swaroop (00:01:30):

Yeah.

Lou (00:01:31):

Tim is the latest member to join.

Swaroop (00:01:32):

He is, yeah. We've worked with Tim for the last 10 years, so every conference, Tim was there. He was still working on the force, but he was just interested in research, and he was able to get a master's in human factors, retired, and started working with us, which was perfect.

Lou (00:01:55):

Is his master's from UMass Amherst or?

Swaroop (00:01:58):

No, it's not. I forget the university, but he did it while he was on the job, so he attended evening school. That was the first thing Jeff told him, if you want to work with us, go get a master's. And he did. And we were like,

Lou (00:02:12):

Okay,

Swaroop (00:02:12):

That's

Lou (00:02:12):

Impressive. You jumped that hurdle. You are definitely the kind of person we're looking for, somebody who's interested in research and willing to do the work. It's easy to romanticize the research, but from being boots on the ground, it's a lot of work. It can be tedious

Swaroop (00:02:29):

And that's one thing we struggled with a couple of hires where you don't get enough real world experience or there's a little struggle in applying the research to real world. And when you start bridging that gap, you need someone like Tim who's seen the crashes, who's been at the scenes, seen the evidence that really fills in the gaps for how do I apply this? Because sometimes there can be a disconnect between what you do in research in a lab and you're writing publishing, and then you look at real world and things are not very ideal.

Lou (00:03:08):

Yeah, it's kind of messy and you have to make some great decisions

(00:03:12):

And defend them and be prepared to defend them. And that seems to be where a huge portion of my efforts gets dedicated, which is like I can probably tell you some real basic things about the crash within a couple hours that answer probably most of your questions, but if you want it to be defensible and for me to cover up the gray spots, not cover up, cover the gray spots, then I'm going to need to dedicate a lot more time. True. Yeah, there's a great quote, I've probably said it on this podcast before, so if people have heard it, forgive me, but it's like you're 95% done, congratulations. You're halfway there. And that's how I feel recon is, and with the human factors stuff, I bet it's even more exaggerated because it's not as concrete as recon

Swaroop (00:04:01):

It is. And when we talk about human factors, there's so much research out there and a lot of the research that's being done, they're not thinking about you and I. They're not thinking about how is this going to go in court or how is it going to sound? It's all done towards improving safety and so that we have fewer crashes, fewer pedestrians getting hit, fewer cars getting in crashes, and their main goal is to figure out what's really going on in a car. So when they are doing the research, they're simply looking at what's a driver doing? What's the driver going to do when we start adding technology, what's the driver going to do when we give them these different challenges that they're more likely to encounter? And when you do that, you can really apply the research to your cases because it's unbiased. You didn't go in with the motivation that I'm going to have this research study because it's going to address this crash.

(00:05:04):

We apply what the research says to what the crash was because we are human and it's very easy to be biased. And so when you take the bias out, just rely on what the research says, not what your personal opinion is, but look at what your average driver is doing on the roadway. That's really what matters. And simply it's always been our methodology to ask what did this driver do? What does everybody else do on the roadway? And if they're similar, there's not much you can blame them for, but if they're doing something which your attentive driver on the roadway is not doing, then yeah, there is a depreciative effect, be it a cell phone, be it fatigue, be it intoxication; all of those are outliers which really push you outside the range of normal.

Lou (00:06:02):

Yeah, and one of the cool things I think about the methodology that you guys are implementing across, I mean your consulting work, I'm sure, I've never really seen any of the consulting work even though I've known you guys for, I've known Jeff a very long time now, almost 20 years, and I've known you for, I was looking, you and I published a paper together in 2017. We did. So we've known each other for quite some time. I haven't seen your work, but I know you guys really well. I know your research philosophy, I know your consulting philosophy, and one of the really cool things that you guys do that I respect so much is you really are compiling an unbelievable amount of literature and bringing it together. So when you approach a case and you are looking at, say, this might not be an appropriate example, but the lateral acceleration of a driver making a lane change, you're not going to be like, well, I found one study, so we're good to go and that's what we used here. You're going to be like, well, we have 28, and here is what factors influence the driver's lateral acceleration in a certain situation. And when we look at that mathematically, or maybe sometimes you have to actually compile it manually, here's what the range suggests and here's how it compares to what the driver did, therefore X, Y, or Z. You're not just pulling that one paper because one paper can probably fool you. Right?

Swaroop (00:07:18):

True. That's our main goal is to find as much research as you can on a topic because there's two main things here. Number one is a good research study has to be repeatable. If you take the data I've got, or if you run a similar study as I did and you come to the same conclusions, that's probably the best kind of a study because what you are finding is a pattern, a trend in what drivers have done and it simply reinforces what a driver did. Now, if you go off of one single sample, we don't know what the variables in the study were. They might be the smallest of choices a researcher might have made which might influence the outcome of a study.

Lou (00:08:09):

Some of that stuff is under the hood. Even as a peer reviewer, you might not see it unless you're really experienced and you've done it yourself. You might see it. But sometimes I'm presented with studies where I'm asked to do a peer review and I'm like, I can tell you the general validity of this, but did I actually determine if their standard deviation was calculated appropriately or if they did something in the experimental design that had an effect that is kind of behind the scenes and not presented in the paper can be tough.

Swaroop (00:08:39):

And we've seen that and I've been reviewing studies and at one point I realized as a peer reviewer, you're not a gatekeeper. All you're doing is you're looking at the methodology that the research has followed and you're validating if what they did followed appropriate methodology, did they account for the factors that might influence a response or it might bias someone's opinions. And if they've followed acceptable methodology and their results are consistent with what other research studies have said, then like I say, when I'm using a study, I am no one to say if a study is good or bad, there's others in the field who read the study and deemed it fit to be published. So the peer review process really goes far beyond what my opinion is because now I'm relying on more than just my personal opinion. But what others have realized and said, okay, this is a great study. It's fit to be published, and once a study gets peer reviewed, there's really no saying, oh, you know what? I don't think that's a good study because there's differences in methodology which affect the outcome. But as far as the study itself, it's a pretty good one.

Lou (00:09:59):

Yeah. And then it comes down to what you were saying about repeatability. If that was the only study and it was peer reviewed and sent through, but it's the only study that's ever generated that result one eyebrow up at least one, maybe both.

Swaroop (00:10:16):

True. And really what we've seen is as long as you account for the methodology, you can answer why a researcher might decide to put a tractor trailer at the end of the roadway. And as participants, when they see different materials, when you see the tractor itself, when you see the retroreflective material, which for that study might be what they're looking for, what's the best possible outcome. And for example, David Curry's study, he looked at tractor trailers at the end of the road and he said, yeah, you can see retroreflective tape a thousand feet away, but in the real world when you've got a guy going 65 miles an hour on a freeway, a tractor trailer with poor retroreflective tape on it, for someone to use that study as an equivalent to this one, it might not really apply because there's differences in methodology. Dave Curry was looking at brand new tape, our crash has really bad tape, or if Dave Curry's study had people going 10 miles an hour and our crash had people going 65. So there's really differences in methodology. And so this is not to say Dave Curry's study was bad. He got the results for the scenario he was looking for. And it gives us good information because it tells us the extents, it tells us how far away you can really see that retroreflective tape. And then you can build on the research and start narrowing down the studies to find ones that are really applicable to your study,

Lou (00:11:56):

Which is a huge part of what you guys are doing is evaluating studies and determining how they relate to a specific crash. That's got to be a muscle you've worked so much at this point.

Swaroop (00:12:06):

Yes, because you've got to break it down into what the variables were. And as soon as you realize what the crash is, how the study broke it down, and try and find similarities between what the crash is and what the study is and account for the differences which we've seen being published as far as daytime versus nighttime, how do you account for the difference? And if someone's getting involved in a crash at an intersection versus someone getting involved at a midblock, we've documented and we know what the difference is between these two situations, and you can account for that because the number one question I get during a deposition is, but this study is not what this crash was. Right. And unfortunately for us, you're never going to have a study that's 100% exactly like your crash. And so the best we can do is find research studies that are most appropriate and most similar to your crash, and then account for the methodology and apply it to your testimony, to your analysis to make sure you know what the study did, make sure you know what your case was, and as long as you account for that, you'll do a great job.

Lou (00:13:22):

Yeah. That adjustment to baseline, as I was taught it was called by Jeff. And really the seminal paper on that was Jeff's 2003 SAE paper. Is that right?

Swaroop (00:13:31):

Yeah. So Jeff did a couple studies, 2003, the SAE paper, 2004 was another study where he built on the 2003 study where he really broke down every single variable. And I was just counting this the other day. There's like 129 studies in the 2003 paper and there's 144 studies in the 2004 paper. So that's a huge database to be breaking down. And then what he did is he broke it down into what variables were there and which variables were showing up across all of these studies. So things like where the crash happened, how much information do drivers have, daytime versus nighttime, all of these factors really kicked in and we knew what the difference was. So daytime versus nighttime, about a tenth of a second slower at night versus during the day. Intersection versus not, about seven tenths of a second, a major factor. What we usually tend to overlook when we apply these studies is to see where the clock is being stopped.

(00:14:49):

Some studies said, Hey, I care about when the foot goes off the throttle. Some studies said, I care when the brake pedal's pushed or when the brake lights come on. And some studies said, oh, I want to know where the tire marks start. And this is a very easy factor to overcome because it's small differences, but studies call it reaction time, brake reaction time, perception-response time, brake response time. And so as long as you account for that difference in time, because you're looking at two tenths to three tenths of a second between when the brake is pressed and when you start leaving tire marks. Or the easier way for me to explain it to a jury is when you start seeing stuff flying off of your passenger seat. For someone to experience what 0.4 Gs or hard braking is, it's basically where you hit your brakes really hard and stuff starts moving, sliding across your car.

(00:15:48):

That's hard braking. How long does it take you to move your foot from the accelerator to the brakes? About half a second. And so have some drivers done it much quicker than that? They have. Have some drivers done it much slower than that? Yeah, they have. But I'm trying to account for what your typical driver is doing. So the average time has been about a half a second for foot movement, about three tenths of a second to start building the brake pressure and start slowing down significantly. So that has been I think one of the main things which a lot of people tend to overlook. And we always give the example from one of your cases where Response, or IDRR at the time, was challenged and you were really able to break it down and talk about, yeah, the study looked at when someone pressed the brakes right after they saw a light, but our crash was more complex. So it really comes down to how well you can break down these studies.
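
To make the end-point bookkeeping Swaroop describes concrete, here is a minimal sketch, not the Response/IDRR methodology itself, that shifts a study's reported time to a common end point. The offsets are the rough values mentioned in the conversation (about half a second of foot movement, about three tenths more to appreciable slowing) and are illustrative only.

```python
# Minimal sketch (not the Response/IDRR method) of normalizing a study's
# reported "reaction time" to a common end point before comparing studies.
# Offsets are the rough values from the conversation, illustrative only.

# Offsets in seconds, measured from "foot leaves the accelerator".
END_POINT_OFFSETS = {
    "accelerator_release": 0.0,  # clock stops when the foot comes off the throttle
    "brake_application": 0.5,    # ~0.5 s of typical foot movement to the brake pedal
    "appreciable_slowing": 0.8,  # + ~0.3 s to build brake pressure / leave marks
}

def normalize_prt(reported_time_s, reported_end, desired_end):
    """Shift a study's reported time so it ends at the same event as your analysis."""
    return reported_time_s - END_POINT_OFFSETS[reported_end] + END_POINT_OFFSETS[desired_end]

# Example: a study reports 1.1 s to brake application (brake lights on), but the
# reconstruction is timed to the start of appreciable slowing.
print(f"{normalize_prt(1.1, 'brake_application', 'appreciable_slowing'):.1f} s")  # 1.4 s
```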

Lou (00:16:56):

It's so funny that you were just talking about that because I was just teaching my motorcycle recon class and we were going over the motorcycle PRT data, and one of the students, Daniel Vaughn I believe it was, said, I see in, I think it was the Ecker study, that they're calling brake reaction time when the brake light of the motorcycle comes on. And then there's terminology later that says brake response time is when the motorcycle starts to decelerate appreciably or something like that. Has the nomenclature been established throughout the literature or do you really just have to look at that paper to see where they're stopping the clock?

Swaroop (00:17:33):

You do, because there is no standard terminology across the board because studies get published all across the United States, all across the globe really at this point. And they may call reaction time as foot off the accelerator, a reaction, or someone pressing the brakes is their first reaction, or something as simple as facial expressions, because that's a reaction to a hazard. So somebody makes a scary face, they call that a reaction. And so there is no standard methodology, but studies do a good job at talking about what they used, and as long as you can put it into these groups of is it pedal release, is it brake pedal push, is it appreciable slowing or hard braking? As long as you can figure out if it's one of those three, you should be covered for that.

Lou (00:18:33):

Yeah, I was laughing as you were talking about Jeff going through all those papers in 2003, 2004. I'm pretty sure he was still pursuing his PhD at that point. And I remember when he was developing the first round of IDRR, I can't remember what the first one was called. I think it was IDRR, but they drive. That's right. And I was one of his beta testers and I would get like three, 4:00 AM emails, and those weren't like, I'm waking up early emails. Those were, I'm not going to bed emails. And I was like, I mean he's obviously got such drive in him and it's clear in how he was able to analyze that many papers and pull out the variables that are statistically significant and make a difference in how a motorcycle or a driver, I'm always thinking in motorcycle terms, but how an operator is responding. Speaking of Jeff, I reached out to him before the podcast and he's like, Swaroop is such a good data analyst that he moved here from India, knew nothing about football and kicked all of our asses in fantasy football year one.

Swaroop (00:19:40):

Yeah, I still flaunt that. I had a few rough years in the middle. I blame that because I got pushed very low on the drafting order because I won. But yeah, I mean data's gold. And I think one of the main things that helped me those first two years is I had no biases. I didn't know who was playing, and it ended up going against me later because those first couple of years I didn't know who was playing. I was like, okay, is he giving me points? I'm going to do that. And ultimately it got me interested in football and I started watching games and I'm staying up late to watch teams who don't even qualify for the playoffs because I've got a player in there. And then I started having favorites, and as soon as you have favorites, you're not going to do well at fantasy. So I stopped watching football since then.

Lou (00:20:38):

That's brilliant. Just so you could do better at fantasy. I'm the same way. I try not to play fantasy baseball too much because I know that I'm going to neglect my homework and then I feel guilty, like being back at school, not doing homework or something. But I did play this year, my kids begged me to and I was like, well, I'm not turning that down, but I just grabbed so many Red Sox like, oh, it'll be more fun, but I'm not doing well. How did you get into human factors to begin with? So you got your bachelor's in IE, industrial engineering, in India, and then you went to get your master's at Texas A&M in industrial engineering as well with an emphasis in human factors. Did you always have that on your mind or was there something along the way in your educational path that drove you in that direction?

Swaroop (00:21:25):

Right from the get go, I've been someone who wants to be able to work with physical objects, so coding or computer science, which growing up in India is a big deal because that's where the big IT boom was. That was never my forte. That's something I never wanted to get into. And when I did industrial engineering for four years during my bachelor's and when I started applying for school here at Texas A&M, I saw that human factors was one of the fields of expertise you could get into at A&M, and it drew my attention right away because I knew I don't want to be doing supply chain and operations for the rest of my life. It's something I fear doing. And so I saw human factors and I was very interested because now I get to measure interactions. So how are drivers behaving?

(00:22:23):

How are people interacting with vehicles, electronics? And at university, I even worked with a nuclear reactor we had on campus, and what it does is it gives you an insight into how people are interacting with their environment. So the roadway, your car, your dashboard, and it tells you how you can overall improve safety because you are trying to make these as efficient as possible, you want to reduce accidents. The very origin of human factors happened really after the nuclear reactor meltdowns, the three mile island. And then we had airplanes crashing and they said, oh, it's human error every single time. And then they started realizing you're really pushing humans beyond their ability to grasp these small changes. And that's where a lot of research started building up. And today we are looking at the most common interaction people have with machines,

Lou (00:23:33):

Which is driving arguably the most dangerous. Even if you consider nuclear meltdown right now, like so many people, 40,000 a year in the United States worldwide, I don't know what that number is. It's got to be huge.

Swaroop (00:23:44):

It is a big number. And what's scary is the numbers don't go down, they only increase. We are implementing a lot of safety nets that we are trying to put in so that we reduce these fatal accidents, but while we start reducing the number on one front, we are also seeing more people on the road every single year, more miles per person driven every single year. And it really comes down to the numbers. The more likely you are to be on the road, you're just increasing the odds of being involved in a crash. So yeah, human factors, it really drew my attention and I was like, okay, this is where I want to specialize, because the very first project I worked on at A&M was why emergency vehicles are being involved in a lot of crashes. And these reports all started citing inattention and they're trying to figure out what kind of inattention is this. And I spoke to a sergeant from a police department during this study and he said, we are seeing a lot of deer crashes in my PD and a lot of drivers veering off the road because they said they saw a deer and were involved in a crash. And he said, we don't know if that's completely true, there's no hair. I haven't seen any hair yet. I haven't seen anything here. And the first time I sat in a patrol car for research

Lou (00:25:19):

Was before that. Yeah,

Swaroop (00:25:21):

That's

Lou (00:25:21):

A different story.

Swaroop (00:25:24):

And I saw there's a laptop in the car and I'm sitting here saying, I'm looking at all this research about cell phones being distracting, and now you expect the same people to sit and interact with a laptop with a larger screen with a more intricate keyboard. It's going to have negative effects. And really we looked at how this information is so vital to a first responder and we were looking at how do you make it more efficient? How do you keep them informed but still keep their safety up? Because again, they're responding to an emergency situation. You want them to get where they need to be because they're going to be providing aid to someone who needs it. And so you want them to get there as safely as possible

Lou (00:26:14):

And you want them to know what they're about to encounter when they get there and if there's going to be backup. There's a lot of information to process. I've always thought about that on a couple different fronts where it's like, okay, one that's a giant distraction, could I do that while driving my truck? No, I'm not a good multitasker to begin with. Second of all, it makes me think, there's a few cars that I've been in recently that are more modern with a giant screen and they just by default seem to have the brightness so high that it eliminates some of my night vision that would give me good visual acuity out of the windshield. And more recently they seem to be modulating that in a better fashion, but imagine that's a bad combo. So did you guys arrive at any solutions and have law enforcement officers around the country implemented any measures to mitigate that risk?

Swaroop (00:27:08):

Yeah, one of the main things we found, at least the study I worked on was a literature review. So I looked at hundreds of studies and we wrote a white paper so that it could be used by people who design these systems. And one of the main features we saw was mainly voice to text, because the number one distractor, or where we've seen the worst effects of distraction, really occurs when you've got something that takes your eyes off the roadway where everything is, but this is where your information is. So something that's taking your eyes off and you have to manually interact with it. So say, let's say you want to type in the number or you have to type in or scroll through it. So these tasks which are both visual and manual, that's where you start seeing the highest effects of distraction. And so what we were looking at is more verbal information, which we understand through dispatch is limited, but we got to try to keep them informed as much as possible through our verbal information. Again, as far as information about navigation, give them more spatial context so that you are not engaging as much of their attention and their need to interact with it. Because now as soon as I have to type something in on a QWERTY keyboard, now I'm picking from one of 26 letters, which is hard to do.

Lou (00:28:43):

I have not, I'm not good at that, I try. Even, I had a Tesla for a while, I think I'm going to get one again soon, and one of the biggest safety benefits to me other than crashworthiness, which is super interesting because obviously the crashworthiness of all of our vehicles has progressed in a very substantial fashion over the past 10 years, but our fatalities are going way up. It's like we've gone from 30,000, we'll talk about this, I have the numbers here, but it's like 2013, it was 33,000, and then the peak was COVID-ish, 2021, it was like 43,000 in a year. That's a giant jump. But when I was in the Tesla and I could just put it on quote-unquote Autopilot, it's not like full, but it's handling everything. If I'm in the right situation, I obviously wouldn't be dumb and put it on when something weird is happening, but I could just change the song or respond to a text from my wife where she needed something quickly. Where am I supposed to take our daughter or something like that. It would always be the other way around, by the way. But that kind of thing, super valuable. And then I get back in my Toyota Tundra, great vehicle, love it, but I can't really, even switching a song, and Siri sucks nowadays. So even switching a song is a big, I feel like it's a dangerous task for me, so I try not to do it unless I'm at a stoplight or something.

(00:30:10):

So you were talking about the nuclear reactor you had access to and the human factors stuff. So it was an organic attraction to human factors within IE, which has always been a subset of IE. So I imagine you were kind of exposed to it and then organically drawn to Texas A&M. And then they also had the human factors and cognitive systems laboratory, which sounds super appealing, like a good time, almost like the HPL at UMass Amherst. You had a simulator, what was that lab like? And you were exposed to vehicular stuff there as well?

Swaroop (00:30:41):

Yeah, so at A&M, the primary lab, the professor I worked with, he ran the human factors and cognitive systems lab and a lot of research we were doing in the lab at the time revolved around driving behavior. We looked at driver distraction. We actually ran a simulator study where we had a laptop, which is programmed to be like the MDT, but we had people come in and we compared their cell phone interactions versus the MDT and we saw on some fronts MDT was better because you have a bigger screen, it's easier to parse through information. But in some tasks where you had to manually input information, MDTs were worse. And the great advantage we had is we also had the Texas A&M Transport Institute, which was across the campus. So we got to collaborate a lot on research with the Transport Institute as well as the research we were doing in our lab. And we looked at what's the best way to inform your drivers. I know my lab mates at the time were looking at vibration feedbacks, which are very prevalent in cars today. Is

Lou (00:31:57):

This ADAS stuff?

Swaroop (00:31:59):

No, this is more, yeah, true. It is like ADAS stuff where if you want to warn someone and if you see that they're already visually engaged and they've got music playing or they've got some information going on, how do you overcome these overload scenarios? Because what we've seen is when we have, let's say we've got a big basket full of resources and we've got two balls which represent our visual system, two balls which represent our auditory system, two spatial, which is just to know where you are. Now as soon as I start engaging them, I start losing these balls from my basket. And so for me to re-access them, you need to switch, so I need to put it back and then pull it out again. So we're not very good at switching. So we were looking at what else can we access? And what we realized is our sense of touch.

(00:33:00):

And so when you're driving now you give them information with a vibrating steering wheel or in your seats, which give you more accurate information, where if I've got my blind spot warning going off on the left side, just the left side of my car vibrates so I know where to look. Or if someone's coming in from the right side, it vibrates on the right side and I know to look there, so we can really access good spatial information like that. And we see that implemented in cars today where the car I drive gives me these warnings, or when you're backing up and you're getting close. You don't want to overwhelm one single source. But now I can pull out a new ball from my basket and I can say, okay, I'm not going to try and switch a task, I'm just going to access what I already have.

Lou (00:33:55):

That touch one seems really powerful. I never heard it described like that. That's really interesting because I'm just thinking of it from a relatively layman's perspective. I'm driving the car, my visual system is already tasked with so much. So then you have a little orange dot by your side view mirror that says there's a car here. I have to imagine that I'm more likely to miss that orange dot than I would if my seat vibrated. So how many manufacturers are implementing that? I remember I've driven a couple Chevys, the pre-impact warning will vibrate my seat, but I've never seen a car where blind spot monitoring is anything but a light.

Swaroop (00:34:35):

So I know the Cadillacs have these systems in place. I know there's more manufacturers who are implementing it, and the Fords do have this too. The newer Fords, I think they've started implementing it and it's good information because like you were saying, a little orange dot. And my wife's car was terrible at this because I used to drive around the road and it used to give me a warning, I still haven't figured out what this warning is because it flashes on for a fraction of a second on the dashboard, it's a yellow light and by the time I look down it's gone and I look up, I'm not hitting anything or nothing's in my way. So you have to convey as much information to your driver as possible. And so just a little flash and I try to figure out what that is now that's taking away from the entire warning because at some point I'm not even going to care about it.

(00:35:35):

I am going to say it's nothing. And when the system really works, it's like the boy who cried wolf and I'm going to ignore it because it's given off so many false alarms. And that's one of the fields which a lot of manufacturers are trying to address right now. False alarms to the extent where you want to turn your system off. And we are looking at forward collision warning and automatic emergency braking. And when we look at real high speed rear-enders, are there some systems which are capable of recognizing the speed difference? Just using a radar, there are, but it's going to go off for a car and it's going to go off for a flyover above you, or it's going to go off for a really large sign that's just maybe offset from the road, offset enough for it to pick it up. And so I'm more likely to encounter a bridge or a sign than I am to encounter a stopped vehicle. And so people started getting annoyed, and in order to reduce the annoyance, you reduce the ability of these systems. And so while the system might be capable of addressing that crash, you might have to turn it off because you don't want it to have the opposite effect where you don't rely on it at all.

Lou (00:37:02):

So you've got to make sure there's very few false alarms. And then to your point earlier, it's like, well, we can't inundate the driver's senses with so much information that doesn't do anything. Yeah, it was presented to their eyeball, but it didn't make it to a decision, to a change in behavior, because they were overwhelmed. And the touch is really interesting. One, the thing that's so interesting to me about the touch is, the way that you described it, it seems like it's so clear, but there's so many manufacturers that are not implementing that sort of thing. At the same time, it doesn't seem like there's a huge, and I could be wrong about this, but it doesn't seem like there's a really tight relationship between manufacturers and people like you with your understanding of the human factors and what's necessary to really get drivers to change their behavior. Alright, so now you're at the Driver Research Institute. You're in Long Beach. I am. Cool. So you got the east coast, you got the west coast. Tim is east coast too, right? Tim is east coast. And then what are your main roles? To give people a little background, the way that I see it, you've got the consulting side and then the research side. What do those look like?

Swaroop (00:38:24):

So right now I'm involved in four major parts of the company, and the first is obviously consulting. So I get hired on as an expert witness. You go into court, you talk about your analysis, applying the research. And a big driving factor at why I love working at Driver Research Institute is because research was something I really liked at the university and it's something I wanted to continue doing. And a lot of that research just flowed directly into where I am today. We are literally looking at driver behavior. We're trying to fill in these pockets of information that we're missing for what is a driver doing at a specific situation. And we are looking at eye tracking data, we're looking at nighttime recognition, perception, response time, which obviously there's a lot to explore there. And the research helps us on two fronts. Number one, when I testify, it backs me up because it's almost like having your foot on the ground where I've looked at the data, I've looked at what drivers have done and I published these studies so it's easier for me to rely on them because now as soon as I write a study about one crash type, I'm educating myself about all the other research that's been done on something similar and now I can testify with more confidence on those fronts.

(00:39:59):

The research also helps us to make Response much stronger. So Response or IDRR, it's really thousands of studies, we were saying 900 plus. But then we looked at, if you really do an in-depth evaluation, these studies rely on a more robust system of an even larger set of studies. So we're looking at over a thousand studies where we're trying to break down every single aspect of driver behavior, and we are looking at perception-response time, nighttime recognition. And a strong focus for us now has been what about the situation even before there was an emergency event? Is your driver speeding where a majority of drivers choose not to speed? Is your driver choosing not to stop at these flashing beacons where a majority of drivers do? And what that does is it's putting some drivers in an emergency situation where they might respond really well, but if they failed in the non-immediate phase and then get all the way to the emergency phase and do really well here, well, you're kind of falling off the wagon back there. And so you want to address every single aspect of that. And obviously the fourth part of what I do is teaching. We've been teaching with IPTM, and Tim, Tim Maloney, who's at Driver Research Institute now, he's taken a big part of teaching with IPTM and Northwestern at this point, and we teach our own classes as well. And so there's four main things, research, teaching, software and consulting, which keeps us very busy throughout the year now.

Lou (00:41:54):

Yeah, that's a lot to do. I know you were saying computer science wasn't really in your DNA, but do you work on some of the coding for Response?

Swaroop (00:42:05):

No, computer science is, if you push me against a wall, I will code, but if you give me the option not to, I will never choose that option. And so yeah, we work with a programming company who does a really good job for us, ars, and we've been able to really move Response, IDRR, out of Excel into a completely web-based interface, which is now more human factored for a human factors software. And we really love it because now we don't have the limitations of data inputs. We can add thousands of headlight models into the program and it works without a glitch. And so having that in place where we've been able to code, and we recently had an intern who specialized in computer science but worked with the Transportation Institute. So he gives us the best of both worlds. And so having him come on board at the end of the year, I think that's going to even make us more in-house reliant and do a lot of the work from in-house, from the computer science point of things too.

Lou (00:43:25):

Yeah, that's awesome. And I think most people, because almost all the people listening to this will be recons of some sort in the reconstruction forensic field. I guess not everybody, but most of 'em. For those that don't know, do you have a quick summary of what Response is? I could come up with my own, but you probably already have a better one.

Swaroop (00:43:45):

Yeah, so Response is basically an annotated bibliography, or in simple terms, it's a compilation of thousands of studies, and they've been broken down by different topics like perception-response time, nighttime recognition, non-immediate hazards, and then we further break them down by crash type. And one of the main things, if anybody looks at my CV or Jeff's CV at this point, is a majority of the presentations we've given talk about why the 1.5 second number is not the most appropriate. And so the main goal is to give you the data as it is, and talk to you about how response time has varied based on different crash types. So you can go into the program, find the crash type that matches the case you are investigating, input what parameters were in your crash into the program, and the program tells you what the most appropriate response time is if you're looking at response time for your specific crash.

(00:44:56):

And it lists all the sources of published and peer reviewed research studies which looked at this crash type and which said, this is what the average driver's done. And our main goal here is not to just give you one single number, but tell you what a majority of drivers do. So us being human, it's inherent that you and I, if we get tested on a single task, we may not respond similarly. And then when I test across thousands of people, that variance or standard deviation is going to be larger. So when you rely on what a majority of your attentive drivers are doing, now you're not setting someone up against a higher standard, because as soon as you say, well, his response is slower than the average, you're saying it's okay for about half the people to crash and the other half not to.

(00:45:52):

So you want to account for what a majority of drivers will do, and Response helps by telling you what the average driver has done, but also what someone who's really fast, on the fast end of the spectrum, and someone on the slow end of the spectrum has done. And by incorporating the research, by giving you direct access to where this information comes from, it's very transparent, and you can go out and spend your own time and look at all the research studies and support your opinions. Response just makes it easier for you to know which studies apply to your crash, and you can apply it and testify to it, because now you're relying on information across thousands of participants as opposed to your opinion of a sample size of one, which makes your testimony robust.
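
As an illustration of the "many studies, report the range" idea, the toy snippet below compiles a handful of invented study entries and reports the spread rather than a single number. It is not the Response software, its data, or its weighting; the study names and values are placeholders.

```python
# Toy illustration of the "many studies, report the range" idea -- not the
# Response/IDRR database, its studies, or its numbers. Entries are invented.
from statistics import mean, stdev

studies = [  # hypothetical compiled studies for one crash type
    {"source": "Study A", "mean_prt_s": 1.1, "n": 40},
    {"source": "Study B", "mean_prt_s": 1.3, "n": 25},
    {"source": "Study C", "mean_prt_s": 1.6, "n": 60},
    {"source": "Study D", "mean_prt_s": 1.2, "n": 32},
]

means = [s["mean_prt_s"] for s in studies]

# Report the body of research (unweighted here for simplicity), not one point estimate.
print(f"{len(studies)} studies, mean of means {mean(means):.2f} s, "
      f"range {min(means):.1f}-{max(means):.1f} s, sd {stdev(means):.2f} s")
```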

Lou (00:46:49):

Going back to what we were talking about earlier, it's like you don't want to just pull, find one paper that talks about what activity you're analyzing, what the average response is. If you can have a hundred of 'em or 200 of them, great. And then the bibliography is phenomenal. I'm generally looking at the motorcycle side, so I do have the luxury because there's not thousands of motorcycle papers. I put each one of those in the file, I read every single one of those and understand every single one of those. Check out the adjustments that are made to my initial calculation or adjustments made to those studies for my subject crash. And then I feel very confident walking in. I know every study that it was based on. Fortunately some of those I'm an author on, and so, and that helps a lot, but I know what the adjustments are, what the papers are.

(00:47:38):

So if somebody comes at me and is like, well, this is different, and it's like, well, yeah, I can tell you it is different and here's exactly how it's different and here's what the literature has shown. So that program's great. And to your 1.5 comments, we used to walk around at conferences and kind of chant 1.5 just to mess with Jeff, 1.5, 1.5. And the joke there, of course, I don't think it's happening so much anymore. I hope it's not, but 20 years ago, I think that people were just saying, well, average response time is 1.5, let's see how their response compares to that. But of course there's a lot more to be taken into consideration.

(00:48:23):

Alright, kind of getting into the fatality trends and stuff like we were talking about, so I have the numbers up here now. We went from 33,000 fatalities in 2013 to 43,000 in 2021, which was the peak. And the whole theory there, and I don't know if you have more to expand on this, but it is just kind of like people were a little bit sick from COVID and they couldn't think clearly, and there's not a lot of traffic, so they're driving fast. And because there wasn't a lot of people on the road, they felt kind of careless. So they took off their seatbelt and put all that together and we hit our peak, which we've been declining from now, but not much. But that's a 31% increase from 2013 to 2021. And then I just put on there WTF, because we have crashworthiness, we have ADAS, we have safety campaigns, we also have cell phones.

(00:49:12):

So the other thing I just wanted to say before opening that up to you, I think you probably have a better take on this. The other thing that people don't talk a lot about, so I looked it up as, okay, yeah, we have 43,000 fatalities, and I tell my kids this all the time while we're driving around, hopefully they're not traumatized, but I'm trying to help them understand how serious of a task this is, and one, don't distract me while we're driving. And two, when you start driving, this is serious. This is one of the most likely ways you'll meet your end from 16 to 24. Fatalities are one thing, 150,000 major crashes, like on the KABCO scale, a score of A, where in California that means essentially incapacitated. We have to tote you away. It changes state to state, but 150,000 of those a year. So there's a lot there and we're getting worse for some reason, except for the past couple of years. We have a little skimpy decline, but it's not nearly as substantial as that incline was. So what's your take on all of that? And I don't know if you have studies that help you understand that or if you've just seen a lot from your casework.
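
A quick check of the rounded figures quoted above (actual FARS totals differ slightly from these round numbers):

```python
# Quick check of the rounded figures quoted above (actual FARS totals differ slightly).
fatalities_2013 = 33_000
fatalities_2021 = 43_000
increase = (fatalities_2021 - fatalities_2013) / fatalities_2013
print(f"{increase:.0%}")  # ~30%, in line with the ~31% mentioned
```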

Swaroop (00:50:26):

So to your point of increasing crashes and especially fatal crashes, the trend, at least during COVID, where we hit those two peaks, those were what we call outliers. And like I said, it's a major factor of fewer cars on the roadway, drivers going faster, more distraction. A lot of these factors were contributing to it. But even if you look at crash stats over the last couple of years, we are still above where we were in 2018. Like I said earlier on, the number one factor is we drive a lot more than we used to before, and more people on the road, more cars on the road, you're just increasing the likelihood of someone being in a crash. Second thing though is when we look at ADAS systems and how they're going in and what part of this large data set do they really address?

(00:51:28):

One number we've seen consistently is, do ADAS systems work? And when I talk about ADAS systems, I'm talking about forward collision warnings, automated emergency braking, even blind spot warnings or alerts. All of these systems have reduced crashes by about 20%. 20% is a large number because you're looking at, I think it's estimated around 300,000 injury related crashes in the United States every year. And if you take 20% away from that, it's a pretty large chunk. But the 20%, while it might sound impressive, we have to look at which part of the 20% it really affects. So the data showed that forward collision warnings, AEB, they reduce 20% of rear end crashes, but these are rear end crashes with lower speeds. So you're really looking at property damage crashes or minor injury crashes. So while they do a good job at reducing these fender benders or something slightly more serious than that, systems today are still not capable of the higher speed differential crashes where you've got someone going over 55 crashing into someone going less than 10 miles an hour. Those account for about six fatal crashes every single day in the United States.
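
Rough arithmetic on the figures as quoted in the conversation, rounded estimates only, just to put the 20% and the six-per-day numbers on the same footing:

```python
# Rough arithmetic on the figures as quoted in the conversation (rounded estimates only).
injury_crashes_per_year = 300_000       # figure quoted above
adas_reduction = 0.20                   # ~20% reduction attributed to FCW/AEB-type systems
fatal_high_speed_rear_ends_per_day = 6  # stopped or slow vehicle struck at highway speed

print(f"~{injury_crashes_per_year * adas_reduction:,.0f} fewer injury crashes per year")
print(f"~{fatal_high_speed_rear_ends_per_day * 365:,} fatal high-speed rear-end crashes per year")
```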

(00:52:59):

So we talk about, it's very unlikely you'll see someone stopped on the highway, but if you do, the chances that you'll crash are very, very, very high, an increased crash risk by about 500 times the normal rate. So if I'm going to go out and drive today, and if the chance that I get involved in a crash today is one, if someone parks or if I park my car on an interstate in free flow traffic, the chance that I get hit is up by 500 times, which is a huge increase. And so the ADAS systems today don't really tackle that. We are seeing improvements. I know a lot of testing that Alan Moore and Shawn Harrington are doing right now is really testing these limits. And we are seeing those numbers slowly creeping up. Right now they're really good up to about 35, 40 miles an hour, but those numbers might start slowly creeping up. We want to be able to get to the 60, 65 miles an hour to really bring the rear end crashes down. A scary trend is also in pedestrian fatalities. Those numbers are going higher and higher. And

Lou (00:54:16):

I think over that same period where we were just talking about a 31% increase for pedestrians, it's 70 something percent. 70 something percent.

Swaroop (00:54:25):

Yeah. And the scary thing about pedestrian crashes is nighttime situations, low lighting conditions contribute about 70 to 75% of pedestrian fatal crashes every single year. And the thing with crash stats is, and everyone can access this, go to the FARS dataset and you can generate tables based on what you need. The thing is you see the trend where, while I'm seeing an uptick in the fatal crashes, so we go from 33,000 to 38,000, the trends as far as who's involved or what contributing factors exist, the percentages stay fairly consistent. So for example, nighttime pedestrian crashes, they've hovered around the 70 to 75% every single year. We are not going from a 50% this year to 80% next year. So they stay consistent. And so while half of the fatal crashes involving cars happen during night, 75% of them for pedestrians happen during night. I got to stop selling black. Unfortunately, it's our choice of clothing, which is,

Lou (00:55:47):

You would probably be seen, I would get tagged all day

Swaroop (00:55:50):

And I, I'd get seen on lower speed roadways, but on higher speed roadways, I'm not helping much. The thing I always joked about before having kids was, when I have a kid, I'm going to find all retroreflective clothing for me, seriously. And now that my daughter is getting to an age where she's going to start walking very soon and running instantly, I think I have to start either sourcing it or maybe a side

Lou (00:56:18):

Project. Oh, that's a good biometric

Swaroop (00:56:21):

Toddler clothing. Biometric toddler clothing. Because the number one place where young kids under the age of five get involved in crashes is in their driveways or right outside of it. So tragic. And so with pedestrian fatalities, number one thing, nighttime is an issue. Second factor is a majority of them happen at non intersections. So you've got no crosswalks, you've got someone crossing across a median, and about 80% of pedestrian accidents happen at non intersection locations.

Lou (00:56:59):

So yeah, it's not a matter of implementing better crosswalk strategies.

Swaroop (00:57:03):

Correct. Right. I think over the last five years of fatal crashes, we had about 10,000 of them happen at intersections and, actually, yeah, about 20-something thousand happen at non-intersections. And some of them had painted crosswalks, but not a major improvement. And so every time, especially in the first couple of years, I worked with Jeff working on these nighttime cases, I was like, there was a streetlight like 15 feet away and this pedestrian chose to cross here and was involved in the crash. And again, this is our bias coming in, which is why are all my nighttime pedestrian crashes happening away from streetlights? That's because you don't get hit when you're there, but you do get hit here. And that's something to constantly remember. What factors help? Biometric clothing, safety vests? A lot of times we say a majority of them are drunk, and the stats show that's not really the case. Intoxicated drivers contribute to about 11 to 12% of pedestrian fatal crashes.

(00:58:20):

The driver, not the ped, the driver, not the ped. And when we look at intoxicated pedestrians, the numbers are around the same, slightly higher than the intoxicated driver. And we see that a lot of these pedestrians are just in the wrong place at the wrong time and get hit, because this is some of the research we did with the police department in Salt Lake City. They had the same issue. The police officers there said fatal crashes with pedestrians and injury crashes with pedestrians, it's just going up year by year. And they were at 50% at nighttime to about 70%, which is very close to the national statistic. And we got the community to volunteer in these studies, and we got the police, we got the community, and we were trying to see where these differences are. And the number one factor we've seen, and this was a great study and I think everybody should try and reach out to your community and try and do this, is we parked a car on a closed off roadway, no lights, just had the headlights on.

(00:59:31):

And we had people walk into the road and said, start walking towards the car and stop where you think people see you. And they were wearing the clothes they were wearing. So some wearing all dark, some in light clothes, and consistently pedestrians overestimated their visibility by a factor of 1.5 to two times. So if I can see a pedestrian in dark clothing a hundred feet away, between 150 to 200 feet away is where pedestrians stopped. My participants stopped and they're like, yeah, I think he sees me now. I had one participant who stepped into the road, and the day before, trying to think of what we'd do, I'm saying 500 feet away, pretty good number. They're going to walk a lot, but I think 500 feet keeps them out of the bias and I can get the higher end of the spectrum. And one of the participants, she's wearing gray pants and a blue shirt, and she walks in, then she walks 10 yards and she looks at me and I said, yeah, yeah, you can start walking. She's like, no, I'm done. And this was a mistake on my part as an experimenter. I didn't realize what was going on. And then she's like, oh, okay. And she kind of hesitated and walked. At that point, I realized, well, this was her answer. And she walks another five or 10 yards and says, yeah, I think he should see me now, the driver in the car. And we were still more than 400 feet away.

Lou (01:01:04):

And that real number for somebody in that clothing is under a hundred feet, isn't

Swaroop (01:01:08):

It? For someone in dark clothing, between a hundred and 250 feet; lighter gray clothes, slightly more. And so there's a big dissonance. One is we overestimate how far we can be seen. And the second is we're not very great at telling speed. So now, if you've got someone who's speeding down the road towards me, I kind of think I have more time to go across the roadway. And so I take my chances and I walk in thinking he's going to see me and thinking he's going to be slower. And that difference in what we think is happening and what is really happening leads to a lot of these midblock crashes.
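
To put that overestimate in time terms, here is a small sketch using an assumed approach speed of 45 mph, which is an assumption for illustration, not a figure from the show:

```python
# Sketch of why the overestimate matters, using an assumed approach speed
# (45 mph is an assumption for illustration, not a figure from the show).
MPH_TO_FPS = 1.4667

actual_recognition_ft = 100    # roughly where dark clothing is first picked up, as discussed
assumed_recognition_ft = 175   # a 1.5-2x overestimate puts the pedestrian's guess at ~150-200 ft
speed_fps = 45 * MPH_TO_FPS    # assumed vehicle speed

gap_s = (assumed_recognition_ft - actual_recognition_ft) / speed_fps
print(f"Pedestrian credits the driver with ~{gap_s:.1f} s of response time that isn't there")
```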

Lou (01:01:50):

That's one of the things that I've consistently told my children at night: never assume that the driver sees you. Just don't do it, no matter how close you are, because I know, based on that research and others, we're so bad at really appreciating what the driver's up against. It's just like, just pretend they can't see you at night and move accordingly. But then, like you said, you get lulled into a false sense of security when you see the car far away. And we're not good — and we'll talk about that with respect to the lead vehicle stuff — but the way that our brain works is we're not that great at detecting closing speed if we just have a couple of headlights, depending on how far away they are and what our angle is. So things can get dicey really quick.

Swaroop (01:02:32):

I think the unfortunate part is that a lot of this information that we know and talk about in classes doesn't trickle down to the people on the road every single day. And so, while we think everyone should know this,

Lou (01:02:50):

Unfortunately

Swaroop (01:02:51):

They don't. And so if you're asked, hey, what do you think this pedestrian should have really done in this crash — not stepped out? The answer is, I don't know, because I know this information and I might choose to do things differently, but your average Joe on the road doesn't know this, and they make the decisions they do. So really, go back to what people do on the roadway and don't think about what a sample size of one would do in this case.

Lou (01:03:21):

Yeah, we bring a lot of bias to each crash, considering our job, I guess. And somebody asked me that on the stand recently: hey, do you find yourself to be biased towards motorcyclists or against them? And I was like, well, I'm a motorcyclist, a lot of my friends are motorcyclists. So if I'm biased in any direction — I was working not on the motorcycle side in this case — it was probably more towards the motorcyclist. I'm generally empathetic to what they're going through, but I try very hard, through the scientific process, and I think you'll see that I did that here, to take my bias out of it, trying not to bring my own personal baggage to this case. And that's a good point. You've got to look at it through fresh eyes, really just scientifically, because we've been programmed the way we have because of what we do. So it's a little bit difficult. I want to just mention — it's kind of a joke, but also kind of sad — you said the average Joe doesn't know that. And I was looking up the stats in prep for this: 70% of the peds are male. What are we doing?

Swaroop (01:04:26):

Yeah.

Lou (01:04:27):

Where are the dumb ones walking out in front of the cars? I shouldn't say dumb, but those are my words, not yours. I don't really work pedestrian cases. So, the most common pedestrian issues I imagine that you see — and for those that aren't watching and are just listening, the reason I said that I would be pegged by a car and Swaroop wouldn't is because I'm in a black shirt and Swaroop is in a really bright white shirt. Clothing seems to be a big part of the — well, it is a big part; it's clearly a statistically significant factor when it comes to detectability of a pedestrian. But what are the most common pedestrian issues? You see the DUI stuff — or DUW if you're walking, I don't know what the wording is there — but it doesn't seem to be a huge factor. So what are the most consistent things that you see when you get involved in these ped cases?

Swaroop (01:05:18):

Clothing. Clothing is definitely the number one factor. In the crashes we've got, they're definitely overrepresented by pedestrians wearing darker shades of clothing. The second is the speeds on the roadway — crossing on a roadway where there's not enough light, the speed limits are higher, and yet pedestrians walk out into the roadway. So a crash that might be avoidable in a residential area might not be avoidable on these higher speed roadways. And then just a lack of retroreflective clothing — what we've seen with bicyclists or even pedestrians is there's not enough information for people to really try out these improvements.

(01:06:12):

We did see a trend with the people who are really involved. So people who are really into running, or people who ride motorcycles or bicycles, have started adopting a lot of these technologies. So I've seen more and more motorcycle riders wearing high-visibility vests, bicyclists wearing — I forget what the brand was, but for a while it was on every single page — a completely retroreflective jacket and markings on the bicycles, and runners having reflective markers on shoes and wearing a running vest when they're out there. And these are really things that improve your visibility by a large factor. Thinking that, hey, my shorts have this retroreflective logo on them, which is like two inches wide — that doesn't move the needle. That has not been enough for drivers to say, oh, there's the pedestrian. You really want to improve your visibility. You want to give approaching drivers as much information as possible. And like I said, intoxication doesn't contribute to a majority of these crashes. If it were the factor, we could really address them — not to say it's never involved, it is a healthy percentage — but a majority of them don't involve either party being intoxicated. It's just pedestrians' decision making, drivers' speed choices, and when they step out into the roadway, they're stepping out where there just isn't enough time.

Lou (01:07:52):

So if we could help pedestrians understand that at nighttime they're not detectable when they think they are, by a factor of two or more-ish, and two, if you are going to be out walking at night — I mean you can't always do this, but even so, from what I understand about the literature, if you have retroreflective clothing on of any sort, that's a benefit. If it's something that gives the driver an idea that you're human and you move like a human — the biomotion stuff, like retroreflective materials on knees and shins and ankles and elbows, where they can see that that's a moving human — you're very unlikely to get hit, I imagine, unless the person's drunk and speeding or something. So if they could dress appropriately and be made aware that they're not easy to see — and then speeding, I don't think we're ever going to make any headway there, that's just my take, I could be wrong — but those two things would make a huge difference from what you're saying.

Swaroop (01:08:56):

Yeah, yeah, definitely put on as much — no, let me rephrase that. Put on an appropriate amount of retroreflective clothing, give patterns. The main thing that helps drivers, or us, is that we are drawn to pattern recognition. And so as soon as I see two things near the ground moving up and down, I see elbows moving up and down, I see the form of a human shape. I don't have to see the entire body. If I get enough of these moving cues, it helps. And in the absence of movement, unfortunately, it doesn't. So if you put a running vest on and try and run or walk, that's going to improve your recognition by a lot. If you're just standing

Lou (01:09:47):

In the street in a retro

Swaroop (01:09:48):

Vest, not enough context, not enough context, because now you look like a sign that might be on the side of the road or further down the roadway. So you really want to break that pattern down — I want little pieces of information in places that help me a lot. Like I said, knees, ankles, elbows, your torso — light them up. And that does a really good job at improving your

Lou (01:10:13):

Visibility. So I've got to get my kids some retroreflective stuff. They already know that they can't be seen, so that's good. But in the cities, I imagine — and a lot of this stuff is midblock — so I'm just thinking, if you send your kids out in the city and they're 12 or 13 or whatever, somebody that's still within your care and is going to listen to you, and maybe you can help them buy clothes, putting on some retroreflective materials — even the Nike pants, or whatever wind pants, we used to call 'em when I was a kid — if it had retroreflective piping all the way from the ankle up the outside of the leg, which a lot of them do, I imagine that would give that pattern.

Swaroop (01:10:51):

Yeah, they do.

Lou (01:10:53):

The interesting study that I'll always remember that relates exactly to what you were talking about is that retroreflective tennis ball study, where Jeff et al — I think this was pre-

Swaroop (01:11:06):

I was a part of it — I helped compile the studies and helped write it up. I wasn't a part of the data collection, though.

Lou (01:11:13):

Okay. So yeah, can you describe that one? Because I love that study, and it's just a clear indication of how discernibility and visibility are completely different things.

Swaroop (01:11:23):

True. Yeah. So the study was very simple. We hung a tennis ball wrapped in retroreflective tape in two locations, one directly ahead in the center of the road and one off to the right side of the roadway. And this is a great study, because when we talk about studies where you say, hey, I'm going to give you more information, every time researchers have done this, participants tend to do really well. But in this study, we informed the participants: drive down the road, there's going to be a ball of light; when you see it, don't hit it. And they just drove down — a real world scenario. So you're driving on the real road, interacting with other traffic if there was any coming across. And what we found was very interesting: pretty much everyone hit that tennis

Lou (01:12:22):

Ball — and they were told they were going to encounter it?

Swaroop (01:12:24):

Yeah, you're told you're going to be driving down the road, you're going to see the ball, and when you see the ball, don't hit it. And everybody hit the ball. So they stopped the car and walked back, and after they hit the ball, they were like, oh, right. And it was a consistent response. And once they did, we walked back with them and asked them, how far away can you see this? Now that they knew where the ball was and knew what they hit, they could see it over 500 feet away — now you know exactly what that ball of light is. But without the context, only about 20% of drivers, or fewer, actually saw the ball before they hit it. And an even smaller percentage of drivers actually tried to do something before they crashed into it — swerved a little, tried to hit the brakes. And the results were far worse when the ball was on the right side.

(01:13:20):

Because once we have that angle — and like I said, we create pattern recognition, but sometimes it works against us — now I'm thinking that ball on the right side is not going to be floating out there; it's probably attached to someone's porch, it's going to be like those reflective markers people put out on driveways so that they know where to turn in, or a streetlight a thousand feet down the roadway. And so the consistent answer we got was, we all thought it was something further down the road attached to something else. Nobody thought it was what we wanted them to see. And so if you've got a bicycle light that's just a small ball on the back of your bicycle, or you're going the other way and it's just a single light, it doesn't help much, because the main thing is you don't have the pattern. All I see is a ball, and I'm trying to make sense of what that ball of light is, and it just merges into the background — it's probably going to be something down the roadway. So if you've got one taillight out, if you've got a single bicycle light, all of those make it so much more difficult for us to really know the true character of what's in the roadway.

Lou (01:14:34):

So if that light is blinking, like you see on the back of a lot of bicycles nowadays, will that help? Or what should they be doing?

Swaroop (01:14:43):

So blinking lights — what we've seen so far is they do a good job at attracting your attention. So if I see a flashing light in a completely stark environment, I'm going to do a good job. But now add that blinking light to an environment where there's a sign that says traffic signal ahead and that's blinking, or I'm in a work zone — now I'm really relying on that one flashing light and I don't know what it's really attached to. The second thing is, while I know there's a flashing light, I don't know where it is, because again, I'm not getting any other cue apart from the light. So while it might improve your response by a small amount — with flashing lights in general we've seen about a third of a second faster PRT — on lower speed roadways they might move the needle. But when you're going 45 miles an hour or more, just the distance that you need to bring your vehicle to a stop, even if you recognize it early enough, makes it much harder to avoid

Lou (01:15:55):

It in time. So then what about the reflectors on the pedals? Has that been studied? Is that a good idea, or do they need to have retroreflective things on the backs of their

Swaroop (01:16:03):

Legs? So the pedals are again tricky, because now it depends on what the bicyclist is doing at the time. If a bicyclist is coasting, he's not pedaling, so I don't get that up-and-down motion of the pedals. If his foot is angled forward or angled backwards while they're pedaling, I don't get the reflector. If their footwear — a little bit of heel — covers that back pedal, again, I don't get that information. And the pedal reflectors are reflectors, not retroreflectors, so you don't see them from as far away as DOT tape, but you still see these reflectors from quite a distance. In an ideal world, I see them moving up and down, I see the reflectors, and they do improve recognition more than if they weren't there. But when we investigate crashes, that's hard to determine — unless you've got video from a dash cam, it's hard to know. So if you've got a bicyclist with these pedals, you're going to need to do some testing. But as far as just improving your safety, you can get these really cheap reflector bands on Amazon — retroreflective bands that just Velcro on across your foot — and just put them on before you go out for a ride. And what that does is, now you're not relying on something that may or may not work; now I'm seeing motion.

Lou (01:17:40):

So it's like an anklet.

Swaroop (01:17:42):

Yeah, it's just like an eight-inch piece of Velcro with retroreflective material on it.

Lou (01:17:49):

Four bucks I'm sure. Something

Swaroop (01:17:50):

Like that. Yeah, for 10 bucks I think you get an entire runner's vest with ankle and elbow Velcro bands, and it's so cheap. And I've always told police officers, when we're talking about being out on the roadway and you've got to clear debris off the road — somebody calls in, hey, I see a big thing in the road — you put your flashing lights on and you park in a place which is safe so that your car doesn't get hit, but now you're out picking up everything on the other side of the road. Make yourself as visible as possible. I've heard people say, I don't want to attract attention to me, but no — attract as much attention to you as possible. The Velcro bands, right, throw them in your bag and just put them on before you walk out onto the road. And that's going to go a long way at improving your recognition.

Lou (01:18:46):

Really cheap, simple fix. I love it. My kids are now at the age where they're riding to friends' houses and stuff, so they'll come back around sunset, and I'm sure it's not too long before they're riding their bikes at night, coming back from friends' houses. So I'll equip them with those and see if I can get some adherence there — that's going to be the most difficult part. But speaking of that, what advice do you have? I think I kind of took away my own things, but what advice would you give to parents regarding pedestrian safety for their children? We were talking before we were recording — you worked on this brutal case down the street that I think kind of traumatized everybody in this area. Two children were struck fatally in a crosswalk, I believe. Just a brutal case. So yeah, anything that can be done to prevent things like that from happening in the future. That was daytime, and I'm going to just say it didn't appear to be the children's fault — I didn't do any forensic analysis of it, but it was some speeding and some racing and everything going on. But in pursuit of minimizing families having to go through that, what can parents tell their children, or what will you tell your child when they do start walking around, to help them avoid being struck?

Swaroop (01:20:07):

So the main thing is to teach them what we do on the roadway. It's the same trend we see with teen drivers — it takes a while for them to get to the mental state we're at, because they have to make enough mistakes to realize, oh, now I can watch for these things. And it's the same thing with younger children. In India, where we drive on the left side of the road, we were always told: look right, look left, then look right again before you step out, because in the time you look left, that car that was really far down the road has gained a lot of distance on you. So make sure you're glancing enough. And if it's a point where you're at a 50-50 — should I go, should I not go — just stop and try and make the decision not to go out into the road, because in the grand scheme of things you've got a lot of time; a couple of seconds don't matter.

(01:21:15):

And so those are the main things. And then even clothing — if you're going to be out riding your bicycle at night, try and improve your visibility, make yourself as visible as possible — and less risk-taking behavior is definitely always going to be helpful. But we understand that with toddlers, and as they grow up into the teenage years, it's much harder to do, because younger children are very rule oriented, which is to say that if I've got the right of way, the rule says I can go out — they don't believe in someone violating the rules. And that's where we come into these more difficult scenarios, because now we have to teach them that everyone's not going to follow the rules. And so as soon as we get to that understanding, to try and look beyond just, hey, I did everything and I still got involved in the crash, we want to be more aware of other traffic, pay attention, and make that one extra glance. In the motorcycle studies we did, the main thing we saw was that experienced riders made that one extra glance back towards where the hazard might be coming from, as opposed to the novice riders, because they were like, listen, let me spend one more second to make sure that everything's clear before I go through. So that one last glance goes a long way.

Lou (01:22:53):

Yeah, they've seen enough stuff. And to your point, we don't have the best appreciation of closing speed necessarily, but if you look both directions and get back to your original direction, which is the most threatening direction, then if that car is in a substantially different spot than it was to begin with, you very quickly put the pieces of the puzzle together and realize that's not a good time to go. So just take a little bit more time, slow down a little bit, make yourself visible. I wanted to actually circle back to that Salt Lake study you were working on with the PD. Did they end up implementing it — did they have some sort of community outreach where it's like, hey, here's the study we just did in town with our citizens and here's the story?

Swaroop (01:23:43):

They did. And that was a great part, because they published it in the local news and it was able to get out to more people and make them more aware. So that was interesting, because we did the study where pedestrians overestimate the distance. The second thing we did was we had eye trackers on our participants, and we had pedestrians wearing different kinds of clothing. And what we saw was very consistent with what we've seen from a number of researchers. Someone wearing all dark clothes — at about a hundred feet away is where you see the crosshairs of the eye tracker actually track the pedestrian on the side of the road. And with dark clothes it's funny, because you watch the video and it's one glance and the pedestrian just goes by, because you're very close to them by the time you see it.

(01:24:34):

But every time we had good retroreflective — so something like a class three vest with reflectors on your elbows and across your torso, and we had them wear the ankle reflectors — about 450 feet away is where we saw the crosshairs pick them up. And that's a big difference. And again, there's community outreach by NHTSA and Federal Highway where they make flyers and say, if you wear this clothing, this is how far you'll be seen. And they say retroreflective increases recognition to 500 feet. And we agree — as long as you have that much retroreflective, as long as you have that much

Lou (01:25:21):

Pattern, the Nike emblem is not going to

Swaroop (01:25:22):

Do. Yeah, just a Nike emblem is not enough. And the third part of the study we did — and this is a video I'm sure a lot of people have seen, maybe 10 or 15 years ago, 3M had a video where they said, no white clothes at night. And it was very interesting, because everybody said white clothes make you more visible. And what they did was very simple. They parked a car, recorded video of people running towards this car, and they showed that pedestrians wearing all dark clothes were very close to the car before you saw them, and pedestrians wearing white clothes — if they were on the left side of the road — you see them about 200 feet away, which seems like a lot. But when you look at perception-response time at nighttime, midblock locations, and bringing a vehicle to a stop, it's just not enough.

(01:26:22):

And so we did the same video, but this time with class three vests plus ankle reflectors, a pedestrian wearing white clothes, and a pedestrian wearing black clothes, and we just put in the distances of how far away you can see them. The retroreflective ones — you see them as soon as the video starts, because you've got movement, you see them coming towards you. The pedestrian in white clothes, again, you really see them in the video about 180 feet away, as they're crossing the double yellow lines. And it really goes to show you how far away you can see them. And in those videos we had them running at three and a half, four miles an hour towards the car. Now just ramp that speed up to like 45 miles an hour and you'll see how quickly — yes, you do see them, but it's a few seconds, where you don't really have the time to avoid these crashes. And so retroreflective — a good pattern with retroreflective — goes a long way

Lou (01:27:29):

And a hundred feet might sound like a lot to people — oh, darkly clad ped, you can see 'em a hundred feet away — but if you're going 45 miles an hour, I don't know what that is, 65 feet a second, something like that, and your perception-response time is whatever it is, 1.7 seconds, it's over. You're not avoiding that.

Swaroop (01:27:48):

So your average driver in that case just gets to the brakes as they're hitting the pedestrian, and anyone even slightly slower than average isn't even getting there.
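As a rough sketch of the arithmetic behind that exchange — assuming 45 mph, a 1.7-second perception-response time, a 0.7 g sustained deceleration, and a darkly clad pedestrian first discernible at roughly 100 feet (illustrative values, not figures from any specific case):

```python
# A minimal sketch, not from the show: distance needed to stop versus the
# distance at which a darkly clad pedestrian first becomes discernible.
# Speed, PRT, deceleration, and visibility distance are assumed example values.
G_FPS2 = 32.2                      # gravitational acceleration, ft/s^2
MPH_TO_FPS = 5280.0 / 3600.0       # 1 mph = ~1.47 ft/s

def stopping_distance_ft(speed_mph, prt_s=1.7, decel_g=0.7):
    v = speed_mph * MPH_TO_FPS                     # travel speed, ft/s
    prt_distance = v * prt_s                       # distance covered before braking starts
    braking_distance = v ** 2 / (2 * decel_g * G_FPS2)
    return prt_distance + braking_distance

needed = stopping_distance_ft(45)                  # roughly 210 ft at these assumptions
print(f"Distance needed: ~{needed:.0f} ft vs. ~100 ft of visibility")
```

Under those assumed numbers, more than half of the needed distance is consumed before the brakes are even applied, which is why the visibility shortfall dominates.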

Lou (01:28:04):

One real quick tactical question, since you work on these cases a lot, what are your go-to papers for ped walking speeds now?

Swaroop (01:28:13):

So ped walking speeds — there are two main approaches; we've looked at this. There are studies that were done in controlled environments, where they had people of different ages, right from young kids, come in to see how fast they walk, and they also told them, hey, walk faster, and saw how fast they go. Those are the controlled environments, so always use that as your baseline. For example, Knoblauch looked at people from three years old right up to people in their seventies and recorded how fast they walked. So that gives us a baseline: if you're really tall, how fast you go; if you're really short, how fast you go; if you're slightly obese or overweight, how fast you walk. So use that as your baseline or typical number, and then adjust for your environmental factors. Are you crossing midblock?

(01:29:12):

Are you crossing against a walk sign, or when it says don't walk? The main trend we've noticed is, if it's an uncomfortable situation — somewhere deep inside you probably know you shouldn't be crossing — people choose to walk faster. So against a don't-walk sign, crossing at a midblock location, when it's raining — in all of these situations you tend to walk faster, and we know how much faster, like a 20% increase or a 30% increase based on the situation you've got. So account for your pedestrian and then account for the environmental factor. And what that does is it gives you a good estimate for what a typical pedestrian has done in a similar situation. Now, is that exactly how fast your pedestrian walked into this road? We don't know, but we're using a number supported in research for what a typical pedestrian does in that case, and it helps you simplify your analysis and gets you to a closer number than if you just used 4.2 feet per second, which all the manuals tell us to use.
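To make the "baseline times adjustments" idea concrete, here is a minimal sketch of that bookkeeping. The baseline speed and the multipliers below are placeholders for illustration, not values from Knoblauch or any specific study, and whether factors should be combined multiplicatively is itself a choice the published studies inform.

```python
# Hypothetical sketch of the baseline-plus-adjustments approach described above.
# The baseline and multipliers are invented for illustration only.
BASELINE_WALK_FPS = 4.2            # example baseline walking speed, ft/s

ADJUSTMENTS = {                    # hypothetical environmental multipliers
    "midblock_crossing": 1.2,      # e.g., ~20% faster
    "against_dont_walk": 1.3,      # e.g., ~30% faster
    "raining": 1.2,
}

def adjusted_walk_speed(baseline_fps, factors):
    speed = baseline_fps
    for factor in factors:
        speed *= ADJUSTMENTS.get(factor, 1.0)
    return speed

print(adjusted_walk_speed(BASELINE_WALK_FPS, ["midblock_crossing", "raining"]))
# ~6.0 ft/s under these made-up assumptions
```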

Lou (01:30:27):

I don't do ped cases, as I mentioned before, so I don't know the answer to this and I probably should, but in Response, is there an algorithmic approach to establishing what a pedestrian's speed is likely to be?

Swaroop (01:30:40):

There is, and it's pretty much how I explained it just now: find your typical speed and then adjust for your external factors. And the more we talk about Response, I realize we're putting our methodology into Response, and we want to keep it simple, right? You want to be able to plug in your numbers for your crashes, and so it tells you what studies looked at for walking speeds, and how speeds increased based on rain, based on snow, if you're someone who stops at the median and then continues walking again, or changes direction midway when crossing. It's got all those scenarios in there, so you can really be very case specific and tune the studies to what your crash is.

Lou (01:31:34):

Yeah, I love that, you guys — the implementation of that was fantastic. Like you said, going from Excel to that online portal, I've been enjoying it. And as I'm taking fewer cases, I also appreciate the fact that I can just purchase it for a month, use it on that case as much as I want, and then it's like, okay, I don't have a case that needs it for a little bit. If I was taking cases like I was five years ago, I would have the annual license, that's for sure. But it's nice flexibility that you guys offer there.

(01:32:05):

This episode is brought to you by Lightpoint, of which I am the principal engineer. Lightpoint provides the collision reconstruction community with data and education to facilitate and elevate analysis. Our most popular product is our exemplar vehicle point clouds. If you've ever needed to track down an exemplar, it takes hours of searching for the perfect model, awkward conversations with dealers, and usually a little cash to grease the wheels. Then, back at the office,

(01:32:30):

It takes a couple more hours to stitch and clean the data. Time is the most valuable thing a person can spend. Don't waste yours doing work that's already been done. Lightpoint has a huge database of exemplar vehicles all measured with a top of the line scanner Leica's RTC360. So no one in the community has to do it again. The exemplar point cloud is delivered in PTS format, includes the interior and is fully cleaned and ready to drop into your favorite programs such as CloudCompare, 3DS Max, Rhino, Virtual-CRASH, PC-Crash among others. Head over to lightpointdata.com/datadriven to check out the database and receive 15% off your first order. That's lightpointdata.com/datadriven.

(01:33:12):

Okay, so I wanted to ask you a tactical set of questions again, with respect to how you handle a typical case. I imagine the call comes in on a variety of different things, whether it's a pedestrian that gets hit or somebody that hits a stopped car on the highway or any of those things. How do you proceed from there, all the way to the point where you feel comfortable testifying with confidence?

Swaroop (01:33:43):

Yeah, the first step with us is always gathering the data — get as much information as is available about that crash. Something very important with nighttime crashes is figuring out the sun position, whether there's any ambient lighting, figuring out whether all the streetlights were functioning. So going through the videos, going through photographs, trying to figure out where the crash was, where the live streetlights were, whether they're going to have any sort of impact on this crash type. And then you've also got site exams.

Lou (01:34:22):

Yeah, site inspection is kind of like it's

Swaroop (01:34:24):

A

Lou (01:34:24):

default for us

Swaroop (01:34:26):

Have to go out there, because if it's a nighttime case, it's a no-brainer. You have to be there, you have to record the lighting. Sometimes I've been told, hey, it's in the middle of nowhere, you're not going to see anything. But it's important to document, because there might just be something in the background which might make things very different — someone's got an electric billboard a thousand feet down the road, or there might be a truck entrance and exit, which may or may not add information. So going out to the scene, documenting what it looks like — and now, having the ability to video record a nighttime scene at 4K quality without any grain is a game changer, like in a drive-through. Do you

Lou (01:35:12):

Drive through and do

Swaroop (01:35:12):

That? Yeah, yeah. Driving through the roadway — just mount the camera on the roof if I'm investigating a tractor trailer crash. And some of these mounts have gotten really good, to where I can go 70 miles an hour with a $5,000 camera on the roof,

Lou (01:35:28):

No way. Oh, I got to know what this mount is.

Swaroop (01:35:31):

And so just going down the road, capturing pristine video and documenting the scene. And then if possible, if it's not a crazy interstate, get out and get light meter readings at your area of impact or areas of interest. That's very important. Even with daytime crashes, I want to go out to the crash scene — an intersection crash, what does the traffic look like? Is there a blind spot where drivers can't see the incoming cars from the side roads, what the traffic sequencing is like, what the traffic trends are like in those areas?

Lou (01:36:06):

Yeah, well that stuff's so important, man. I have not had a case where I have not been to the site for a very long time. I go because I always learn something, and there's something unique about every area and the way drivers treat that area, in my experience, so you've got to get boots on the ground and get to that site inspection before you continue. Two things. One, are you still using an Extech light meter? And then what's that camera that you're using — is it a Sony a7S?

Swaroop (01:36:39):

Yeah,

(01:36:40):

Yeah, so the Extech LT300 — that's the camera I've been using. Oh sorry, that's the light meter I've been using. Extech does have a new light meter, the LT45, which they say is more catered to LED lights — it's calibrated around those LED lights. But in our testing I've seen there's not a difference, especially in low-lighting situations where we're talking about something between zero to 10, zero to 20 lux of light. The differences I'm seeing are 0.1 lux, if that, between LED lights and halogen lights with both light meters. So I still prefer to use the LT300, because it's got a backlit display, and if you're out on a dark road and you can't see what your light meter says,

(01:37:37):

You're going to pull your phone out to put a flashlight on, and as soon as you do that, the light meter is going to pick it up. So definitely the LT300. Konica Minolta makes one, the T-10, which has been around for a while — four times as expensive, too — but as far as the accuracy between the two, the difference is minimal, at least in the low lighting conditions we're working in; the difference is negligible. And for cameras, yeah, the Sony a7S II — we've been using these for around seven, eight years at this point. And I know they have an a7S III that's out; we haven't seen a major difference in video quality between the two, so we've continued to use the S II. The great thing about this camera is it's so sensitive to light — its built-in ISO goes up to about 102,000, which is unthinkable.

(01:38:46):

And then the digitally enhanced ISO goes up to about 400,000, which we're never going to use, but it tells you that it's going to be really good at your 10,000 ISO, which with our old Nikons you'd get really grainy videos out of. So the ISO is great, the sensitivity to light is great. It's a 12 megapixel camera, so everybody's like, but you can get 48 megapixel cameras — and it's important to remember megapixels don't matter at nighttime, because you want to gather as clean a video as you can. You don't want to get a lot of headlight bloom coming from the other side, because that's not what we see as drivers. We see a source of light — you might see a little bit of bloom, but it's not washing out our entire field of view. So you can set it at a 1/250 shutter speed, the f-stop at 1.8, and 10,000 ISO, and you get the most beautiful 4K videos you could at nighttime. And now I don't have to think about having to stop on an interstate — I'm going 70 miles an hour, I just pick the exact frames I need, and I have my contrast gradient out so I know whether the frame is calibrated to what I saw or not, and it makes life a lot easier.

Lou (01:40:08):

That's actually what that camera right there is — a Sony a7S II. I don't remember exactly why I bought it, but it's been one of the better investments I've ever made.

Swaroop (01:40:17):

I remember Jeff Suway was the number one proponent of the camera.

Lou (01:40:21):

Yeah, exactly. And I remember I asked Suway — the camera that's on you right now has a Zeiss 50 millimeter on it, and that's the lens that I got to pair with that Sony. I think I must've been doing human factors, like visibility stuff, daytime — I'm generally not crazy enough to get involved in the nighttime stuff, I bring on the pros for that — but then at nighttime I realized I should put a 20 millimeter on that Sony a7S II so that I could stop down and stabilize. And I learned that from Suway. So what mount are you using with that Sony a7S II? Is it damping out vibrations, or do you do that in

Swaroop (01:41:00):

Post? We try to do that in post. Unfortunately, there is no mount that can sit on top of your car and stabilize the video — yeah, I've looked extensively. The best you can hope for is to do it in post. I tend to use Sony's image-stabilized 24-to-95 mil lens, because I don't do — I mean, I do a lot of local work, but a majority of my cases are across the country, so I want to pack as light as possible. So I use one lens that's going to get me there, that's going to fit into my Pelican case, which is always treading at like 49.5

Lou (01:41:49):

Pounds

Swaroop (01:41:51):

And be able to go out and just capture the video, and it does a great job. You don't see blooms, you don't see random reflections in the lenses with the whole setup. I think if you're someone who does this on a regular basis — say at least one nighttime case a month — it's worth the investment, because it does great for morning videos too, and your file sizes are more manageable than if you were to get a Sony a7R. The only thing I'd be wary of is the display on the camera itself: sometimes it's brighter, sometimes it's dimmer than what the scene really looks like. So what I do is I tend to calibrate my camera, and then the shots I take are slightly brighter, just to make sure that I'm not going back and finding a darker image on my laptop. By doing that, I can adjust, because I can dim my photographs and not lose detail, but I can't brighten a dark image to look like what it was and still expect to have the detail there.

Lou (01:43:00):

Yeah. Yes, we got — I can't remember exactly why — an Atomos Ninja, which is a monitor that you can hook up to any camera, and we've hooked it up to that Sony Alpha at times, mainly for Lightpoint production stuff, not casework. But that monitor is meant to show you the image accurately, so you can set things up properly, and it's not that expensive — that thing's been one of the better things I've ever bought. And you can just put a hard drive into it, so you're recording directly to the hard drive. Of course, that's probably not something you want to attach to the top of the car — you want to keep that assembly as light as possible. I interrupted you with the technical question; I was really interested in your camera setup and all the tools. So you'll get out, you'll do the light measurements, you'll do the drive-through measurements — or do you leave that to the recon?

Swaroop (01:43:50):

As far as physical measurements and lane positions, usually by the point I'm getting involved in a case the recon's done the legwork, or I make sure that when they're going out and I'm on the case, I'm going out with them. So I let them do their work, because the main lesson we've learned is the recon's specialty is doing the reconstruction and ours is the human factors, so we can build on their expertise. We've always had the recons do everything post-impact — based on where the vehicles ended up, they come back to the point of impact and analyze the EDR. Then we make sense of what that data is, what the speed choices were, what the braking behavior was, what the perception-response time is in that specific case. That's been our approach: let them do what they're good at and we do what we're good at. And so we're both not testifying to the same thing, and it helps us create a good barrier between what they have done and what we are doing.

Lou (01:45:01):

So you get all the data from the site inspection. I imagine 99 times out of 100 you have some depositions from either witnesses or drivers or anybody involved in the crash; you're analyzing that, summarizing that, seeing what parts of their testimony are going to affect your analysis. And then it's time to start getting into analyzing contrast and perception response and ultimately generating the analysis. What does that part generally look like for you? I imagine it could take five hours or it could take a hundred depending on the case, but what does that process generally look like?

Swaroop (01:45:42):

So typically, once we have the entire case laid out — going out to the scene, capturing the lighting — it depends on the case. If it's a nighttime case where contrast was an issue, which is usually a case where you have lighted roads, because now you don't have a uniform background, or you've got to figure out whether that patch of light had an influence on the driver being able to recognize the pedestrian or a vehicle parked across the roadway, we have to analyze that: code through the video, go through the photographs we've captured. And we also collect luminance readings — we have a Konica Minolta luminance meter, and we're able to capture how bright the object is, so how bright the shirt looked, how bright the pants looked, and we can compare that to the background brightness. Contrast is essentially the difference in brightness, so the larger the contrast, the better. It's

Lou (01:46:41):

So — illuminance, because I always get those mixed up — what you're measuring with the Extech is how much light is falling on the meter. What you're measuring with that Konica Minolta you're just talking about, luminance, is if I fire light at something, how much of it comes back?

Swaroop (01:46:57):

Yeah, so the LT300 is an illuminance meter, and that captures, exactly like you said, how much light is falling on the surface. Now the luminance meter, where you can look through a scope and measure lighting, is measuring how much of the light that was incident on that surface is getting reflected off of it that you can see. So we're looking at how much light's falling and how much of it gets reflected. And as you might imagine, darker clothes don't reflect much, because black tends to absorb and white tends to reflect. And so that's the main relationship there: for someone wearing darker clothes, you're going to need more light; for someone wearing lighter clothes, more of that light is getting reflected as opposed to absorbed, so you need less light. The luminance meter really helps us capture the effect of streetlights and, eventually, the headlights involved. And then it really comes down to, once we have the case, we go in and do a deep dive into the research studies. You want to go beyond what the specific crash is. So at least as far as perception-response time, we start off with the studies — Jeff Muttart's 2003 and 2005 studies and the adjustments to baseline — so we're able to look at that. And in the last few years I've been working with the SHRP2 database,
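As a minimal sketch of the contrast idea being described here — using Weber contrast, one common way visibility work expresses the difference between a target's luminance and its background's; the luminance values below are made up for illustration and are not from any case:

```python
# Weber contrast from two luminance readings (values here are invented examples).
def weber_contrast(target_cd_m2, background_cd_m2):
    """(target - background) / background; sign shows brighter vs. silhouetted."""
    return (target_cd_m2 - background_cd_m2) / background_cd_m2

shirt_luminance = 0.35        # example reading off the shirt, cd/m^2
background_luminance = 0.10   # example reading off the background, cd/m^2
print(f"Weber contrast: {weber_contrast(shirt_luminance, background_luminance):+.2f}")
# Positive means the target is brighter than its background; negative means it
# is seen in silhouette. Larger magnitude generally means easier to detect.
```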

Lou (01:48:25):

Breaking

Swaroop (01:48:25):

Back

Lou (01:48:25):

Up, that's what we're going to get into next, which I know is crazy.

Swaroop (01:48:28):

And so we're able to figure out what the crash is. And then you have these odd scenarios — some scenarios don't match the studies we already have — and then you start looking for situations where, if speed was a factor, how it affected things, and you're able to find more studies that are relevant. And as always, you're just padding what you've already got: if there's a small hole in the research you've got so far, now you're making sure you cover that and account for the differences. And then the next part, definitely, is the analysis — bringing all the numbers in. How I like to do my cases is essentially to start off talking about what the average driver, the typical driver, has done in this crash. And I try to stress "has done," to talk in the past tense, because I'm talking about everything that's in the research, which is history — you're relying on historical information.

(01:49:34):

Approximately 75 or 80% of my report talks about what drivers have done. And then at the very end I talk about, what did your pedestrian or your driver do, and tie them into the research and say, hey, this is what typical drivers do; now, where does your pedestrian, where does your driver fall in this bell curve of responses that we've seen? Because what that does is take my biases out. As soon as I see someone's over hours, or as soon as I see someone has a BAC of 0.08, more or less, I don't want to go in with my opinion formed — oh, he's worked over hours, he's going to be fatigued; he has alcohol in his system, so that caused the crash; or he was on a hands-free call at the time of the crash, that's what caused the crash. Because you can have someone who blows a 0.22 and responds as fast as anybody else — it just depends on everything else they did before that. Does intoxication affect speed choice? It does; they might choose to drive faster.

(01:50:54):

Does their yielding behavior to red lights or other signs go down? It does. So instead of focusing on what alcohol may do to a person, I want to look at what it did to this guy. And the best way to do that is to look at perception-response time. I get calls from police officers who are investigating criminal matters and they say, hey Swaroop, I've got this case where this guy is at 0.23 and he's going like 60 in a 30, but he gets to the brakes in half a second, right? You tell me — you start the clock when the pedestrian walks into the road, and this guy's on his brakes like half a second after that. What am I to do in this case? I said, let go of the perception-response time, because you're going to get asked whether he responded fast.

(01:51:47):

He took half a second — that's superhuman, isn't it? I'd say, yeah, I agree. Just go back and say: I agree that his response is stellar, but he put himself in a situation where he has to lock his brakes up instantly to try and avoid this crash, and even at that, it's not possible. Had he been going the speed limit, the crash probably would've been avoidable; had he been following the traffic signs — the case I got called on, he said, oh, he blew through a stop sign and crashed. I said, well, that's your answer. Most drivers don't go through a stop sign at 60 miles an hour. Is his PRT perfect? Oh yeah, it's spectacular. But does the PRT really matter here? Not really.

Lou (01:52:36):

Yeah, that's not the topic of discussion there. I always find that interesting, because I've had the same experience. I haven't worked on that many cases where there's been a drunk driver or somebody blowing over the legal limit, but in those cases the attorney inevitably wants to say, well, surely they had a bad perception-response time and that's going to contribute to this. And I'm like, well, we've got to analyze that and figure it out, and then you'll need a human factors person to talk about what effect that had. And heads up — it might not have been anything, it might be a non-issue. And if it's not causative in any fashion, then why do you really care? So, you mentioned the SHRP2 data — it is so cool. I guess just give people a little bit of background on the SHRP2 data, how it came to be, and what kind of data we get. One of the coolest things to me is we get video of the drivers in these cases. And I'm also curious, now that we're talking about DUIs, is anybody drunk in the SHRP2 data? There is, yeah — because they're volunteers. They

Swaroop (01:53:35):

Are. And so SHRP2 is one of the most extensive studies that's been funded by the Transportation Research Board and other government organizations. And it's extensive because they managed to get six universities across the country on board, and each university hired participants. They ended up with a little more than 3,500 drivers and had them drive for about three and a half years with an accelerometer, a GPS, an OBD-II port recorder, and four cameras mounted in their cars, and just drive around as they normally would. They paid them, I think, $500 or something like that, and said, hey, you're not doing much — just drive your car as you would in your normal day-to-day life and the system's going to do everything. If there's a situation where you're involved in a crash, it records it, and the next time you're connected to your wifi, it's going to upload everything.

(01:54:47):

If there's a situation where you think, oh, this is interesting, just press a button and it's going to record it for us. And so they ended up, I think, with two or three petabytes of data over the three and a half years — I don't even know what that prefix was. Petabyte — a petabyte is like 10,000 terabytes. Oh my gosh. Oh no, a thousand terabytes. And they had so much data, and it's so much data because they got around — I forget — about 2,000 crashes, holy cow, and they had 13,000 near-crash events, something which required drivers to lock their brakes up or swerve in an emergency manner. And apart from that, they also just randomly sampled data, because they wanted to know, hey, what are people doing when they're not involved in these emergency events? And so it's a lot of data and

Lou (01:55:48):

Dream for a human factors

Swaroop (01:55:51):

People. Oh my gosh. And I'm surprised not a lot of people use it. There's a lot of research people are using it for, but I would think everyone should be using this, because it's real world data. And to your point of, well, they know they're being recorded and they still drive intoxicated — it's funny, because the researchers who did this study, the Virginia Tech Transportation Institute, kind of headed it and that's where all the data is stored, and what they tell us in their presentations is the threshold for when these guys started becoming unaware that there's a camera in the car recording everything they're doing was right around the one-month mark. Because at the one-month mark, what they used as a good variable is "personal tasks — other," and these were things like picking your nose — picking your nose, number one — going back to eating, going back to using cell phones, going back to driving intoxicated.

(01:56:55):

And we've seen videos of these guys driving drunk, trying to get home, and he's just all over the place — come on, man — and then finally the video ends and he hits the curb. And that was their safety-critical event. But we have great data from the real world, and there's a plus and a minus to this, because now if I'm investigating a specific crash type, it's not going to be as controlled as if I were to do it in a simulator or on a track, where I can control for — as soon as I'm three seconds away from this intersection, I can trigger my car to come in and I know, for a three-second event, what my PRT is going to be. But the advantage of SHRP2 is that it tells us a little more about, hey, you know what, drivers are not always going to see a three-second event.

(01:57:50):

It may be a half-second event, it may be a three-second event, it may be a five-second event. So there's a little more variability, but you can account for that by putting them in little groups and analyzing each one. And when I started working with SHRP2, our proposal to the university — who you sign an agreement with — said, give us all your crashes and near-crashes. And we thought, yeah, this is going to be easy. And as soon as we got the database, and as soon as I wrote my first study with left turn across path, I thought, I think I'm going to be doing this for the rest of my

Lou (01:58:27):

Life. I was going to say, it's the equivalent of a forklift showing up at your office when you had no idea — but the data equivalent. You're like, oh, I ordered that, whatever it was, some industrial supply, and you see a tractor trailer roll up. So I was curious about the logistics of that. First of all, how do they get you the data? Are you accessing it via the cloud? And then what does the platform look like, to the point where you can actually sift through that in a meaningful manner?

Swaroop (01:58:56):

So they've done a really good job at parsing the data. So they had university students who volunteered in the labs, they got paid to work in these labs, and they went through every single safety critical event as they recorded it. And they were able to do an early level of coding for every one of these events.

(01:59:18):

So they categorized it by crash type. They had demographic information from before, so they knew if it came from this car, it's got to be this participant — this is how old they were, this is their gender, this is their driving experience, and all of that information. But apart from that, they had people painstakingly sit through every single one of these videos and code when impact was, code what the reaction was, code the braking. And then they integrated that with all of the onboard system data — the braking behavior, brake pedal position, accelerator pedal position, so much. And so when we get this, we're able to ask them for specific crash types, where you can filter it down to a level which is more manageable. And once you have that, the number one thing we realized is the coding from these in-house coders for the videos could be off.
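Just to picture what "filter it down to a manageable level" can look like in practice, here is a hypothetical sketch of pulling one crash type out of a coded event table. The column names, values, and the use of pandas are invented for illustration and do not reflect the actual SHRP2 coding scheme or access tools.

```python
# Hypothetical example of filtering a coded event table to one crash type.
import pandas as pd

events = pd.DataFrame([
    {"event_id": 1, "event_type": "crash",      "crash_type": "left_turn_across_path", "driver_age": 34},
    {"event_id": 2, "event_type": "near_crash", "crash_type": "rear_end",               "driver_age": 61},
    {"event_id": 3, "event_type": "crash",      "crash_type": "rear_end",               "driver_age": 22},
])

# Keep only the scenario of interest before re-coding the variables you care about.
ltap_crashes = events[(events["event_type"] == "crash")
                      & (events["crash_type"] == "left_turn_across_path")]
print(ltap_crashes[["event_id", "driver_age"]])
```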

Lou (02:00:21):

Yeah, you've obviously done your fair bit of coding, and you guys would always do double blind, right? We did.

Swaroop (02:00:28):

And so we had at least two extra coders who looked through the SHRP2 data — every single video, frame by frame — and basically coded in our variables. Because the number one thing to remember is that the people sitting in the SHRP2 labs coding the initial information aren't looking at it with the eye that we would, because we're trying to be a little more specific, and the variables they cared about might not matter to us. So for example, they looked at facial expression for a reaction time, and I compared the facial reaction times they coded in against the brake response times that we coded in, and what we found was that facial reaction is not the best variable, because you have some drivers who are just stone cold, braking at 0.8 g's without a twitch in their face.

Lou (02:01:27):

Yeah, you got iceman out

Swaroop (02:01:28):

There, and you've got people who are freaking out way before they even get to the brakes. So there was a lot of variance there. So we relied on our coding of the videos, integrating it with the onboard system data, and we were able to break the numbers down and validate them with two coders. And that's where a majority of the time goes.

Lou (02:01:55):

You're not doing that anywhere else, because — unless I'm wrong, and I've done my fair bit of research, I've done research with Jeff — there's no money in it. You're just doing it because it's an interesting thing and it'll help the community. And you publish some papers and it feeds Response, but it's not like Response is pulling in 20 million a year. You've got to have some interest and some passion to get involved with that.

Swaroop (02:02:18):

To have me sit through and watch — the first study I did was 350 left-turn-across-path videos, looked at frame by frame. It is rewarding, because now I have more understanding of what a driver is doing on the roadway. But like you said, it's more personal growth, personal education. Because now when I'm on the stand and I'm testifying, I can talk about, hey, I watched 350 drivers go through this and I know what they did. And it helps with the teaching, because now I can communicate this information, and obviously that drives the accident reconstructionists to consider us more when they realize it's beyond their scope of expertise and there are some nuances — then we come in and help fill in those gaps. So yeah, doing research is all self-funded, and we do it because we have a passion for it.

Lou (02:03:17):

Yeah, ever since I got out of grad school, I haven't been paid to do any research. I think about going back — no, that's nuts. Like you said, it's the feel that you develop, and the skills and the understanding of a crash that you might not even be able to articulate, but that you bring to teaching or consulting. I can't even imagine watching that many crashes, coding that many crashes — naturalistic, real crashes, which is all of our dream, man. That's the best. All we have on the motorcycle study side for that — I think Virginia Tech was involved in this too — is the MSF 100 study. So I do have access to all that data, and now I'm thinking — we were talking about this this morning in the motorcycle class — one, those things were not necessarily analyzed in a way that's super beneficial for reconstructionists.

(02:04:09):

To your point, we have to look at the data and relate it to reconstruction — we're most interested in what the average acceleration achieved by a motorcyclist is during their entire braking event, since that's how we're going to analyze speed. And anyway, with AI now, I'm thinking I could probably very quickly go through their coding system and the data they provided to me, pull out the key events I want to analyze, put a closer eye on those, and do some work there. Because naturalistic braking data for motorcyclists is slim right now, and what does exist has me very suspicious, because it's not consistent with what I'm seeing in the work that I do. So anyway, that's my next paper — I think there's a lot of work to be done there. And the nature of that paper you were talking about, the LTAP one, left turn across path — one of the coolest things about it, and I know you and I emailed about it several years back, is looking at the braking behavior of these drivers, which I don't know if we've ever had data on before. We started to see what braking rates drivers achieved and how long it took them to get there, and it was a little bit disappointing what drivers were achieving.

Swaroop (02:05:31):

Yeah, I think it has to do with a lot of human behavior. Drivers don't necessarily do what we want them to do, and I don't blame them for that because they're human and that's what they're going to do. But yeah, we looked at what's the highest threshold of braking drivers were getting to, and we had a hundred percent hit 0.4 Gs, because that essentially is the trigger for these events to be captured.

Lou (02:06:01):

Okay.

Swaroop (02:06:02):

Either 0.4 Gs of braking or swerving at about 0.3 Gs is their threshold,

Lou (02:06:08):

Which is your average parent that's late to picking up their child from soccer practice, just stopping at a stop sign probably.

Swaroop (02:06:14):

Yeah. So every time they had that, we had a hundred percent hit 0.4 Gs, which is a point where you could get significant deceleration. And we were interested in looking at how long it has taken them to do that. And we saw the average time for someone to get to 0.4 Gs has been about a third of a second, 0.3 seconds. So the ramp up from zero to 0.4 is about 0.3 seconds, and then from the 0.4 to your 0.7, 0.8 Gs, it's a tenth of a second at most.

Lou (02:06:49):

Oh, okay.

Swaroop (02:06:50):

Yeah. It's not a slow ramp up, but once you have that initial ramp up to get to hard braking, then it's pretty fast. And that's one of the things brake assist systems try to do: manufacturers have realized that for us to get from 0.0 Gs to 0.8 Gs, it takes us more time than what the car is capable of. And in one of the classes I taught, a reconstructionist is like, hey, I can do 0.8 Gs, I can put a Vericom in my car and I can do 0.8 Gs in a tenth of a second. And I said, yeah, you can, but will you? If you're on a test track, I can put this stuff in. I know exactly when I have to hit my brakes and exactly how hard I have to hit my brakes. But in the real world, nobody's hit their brakes that hard. Probably once or twice in four or five years of driving is where they get to the point of probably leaving tire marks in the road. And so we are almost afraid of the unknown. If you train someone to do it repeatedly, they can do it every single time, but no one's training us on avoiding crashes. We are just learning from the real world. And when we looked at how many drivers got to 0.8 Gs, it was like 30%.

Lou (02:08:11):

Yeah, I wrote that down. It was so nuts to me. It was exactly that, man. Good memory. 30%,

Swaroop (02:08:16):

30%, 30% of them got to about 0.8 G

Lou (02:08:19):

And 58% got to 0.7. And I was like, damn, because if I'm doing an avoidability analysis, and I haven't done this for years, but before your study, if I was doing an avoidance analysis for a driver driving a 2020 Mercedes-Benz with ABS, I would all day put 0.75 in there, probably 0.8.

Swaroop (02:08:39):

And I do that too. I still use a 0.75, 0.7 Gs, because I'm giving the benefit of the doubt. I'm being more conservative to see what's the fastest someone can avoid it. But what I try to refrain from is using what the drag sled says, because some roads might read like a 0.85 or sometimes even slightly higher than that on brand new roads. And while yes, can you achieve a 0.85, maybe slightly higher? Do drivers really achieve that in real world driving? They don't. And we've seen that repeatedly. And so even something as simple as how hard I hit my brakes is a human factor. Totally.

Lou (02:09:20):

Exactly. How much pedal force do you have? And I've seen that with novices. My brother crashed a lot when we were growing up and I was in the passenger seat, and I remember one time specifically, he rear ended a Jeep in front of us and I was the one who saw it, which happens a lot, which is an interesting phenomenon: the passenger is not tasked with all of the driving responsibilities, so they sometimes see stuff before the driver. I said something to him, we definitely had time to stop, but he was like a 17-year-old driver or whatever he was. He applied the brakes, but not aggressively, probably 0.5 or something, whereas that car would've been able to do 0.8 all day, and that would've either made it less substantial or non-existent. But what percentage of people have ever gone out in their car and hit their brakes as hard as possible? So few. And I think we all should.

Swaroop (02:10:11):

Yeah, we should, right? And it should probably be a part of driver training, because learning how to hit your brakes really hard, that's something that's important, because we don't use the full capability. And I think that's where, like I said, brake assist systems help us. It realizes the forward collision warning is on and the driver is hitting the brakes, so it knows you're trying to brake for a hazard, and it just takes over and says, you know what? Let me take over and let's try and avoid this crash. Because even me, having been doing this 10 years, I don't think I'm willing to lock my brakes up or go full braking when I'm coming up to a car, because now I'm also afraid of the car behind me. Is he going to crash into me if I brake really hard? So there's a lot of things that we think about before we brake.

Lou (02:11:04):

No, it's so true. That was really interesting. That was the first paper that I had seen where we started to quantify what percentage of drivers achieve what braking rate, and that's super important. So you answered the time question. So it's like 0.3 seconds to 0.4, then it's like 0.4 seconds to 0.7, 0.8, somewhere in there. So that's not a ton of time, which is good. That's the other paper I want to do. I have all the instrumentation here, I just haven't spent the time, but braking latency for motorcyclists, because I think that's such a cumbersome activity that I feel like we're going to have a pretty big latency, but nobody's really studied that right now. There's one guy, I think it was Kasanicky out of Germany, who just hopped on a bunch of different motorcycles himself and did the testing, but I'd like to see how a whole group of riders respond, because the human is the most important thing there. So you kind of touched on it a little bit, but how do you recommend that a recon models the braking behavior of a normal driver based on what you saw from that SHRP2 data?

Swaroop (02:12:06):

So as far as modeling, we've always talked about this and I have conversations with recons about just this. How nuanced are you trying to get? And so our typical methodology is, at least until you get from foot off throttle to brake application, it's a negligible drag in itself where you're not losing that much speed, and it's half a second. So even if you are, it's a fraction of a mile an hour that you're probably losing. The next three tenths of a second is for a passenger car. Again, the three tenths of a second is for passenger cars; for heavy trucks, I try to rely on the recon.

Lou (02:12:55):

Yeah, those papers get deep quick.

Swaroop (02:12:57):

And so they'll tell me on a bus it's like 0.3, 0.4 seconds; on a tractor trailer it could be as high as half a second, and so it's slightly higher there. And then for the next three tenths of a second, choose to ramp up from zero to 0.4 Gs, and then everything after that, so we are talking, you're at PRT, and once you're at PRT and at the 0.4 Gs, just go hard braking at 0.75. And that should cover almost every aspect of response. But if you want to really break down the numbers, yeah, sure, go for half a second of 0.01 drag and then look at exactly a gradual ramp up curve up to your hard braking and then ramp it up again. But you might end up with a difference of a foot or two.
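
As a rough illustration of the piecewise response model described above, here is a minimal sketch. It assumes a passenger car and the round numbers from the conversation (half a second of negligible drag off throttle, a 0.3 second linear ramp from zero to 0.4 g, then hard braking at 0.75 g); the function name, the starting speed, and the linear ramp shape are illustrative assumptions, not the Institute's published method.

```python
# Minimal sketch of the piecewise braking response discussed above.
# Assumed values: 0.5 s of negligible drag after throttle release,
# a linear ramp from 0 to 0.4 g over 0.3 s, then 0.75 g to a stop.
G = 32.2  # ft/s^2

def speed_and_distance(v0_mph, dt=0.001):
    """Integrate speed and distance (ft) through the three phases."""
    v = v0_mph * 5280 / 3600.0   # initial speed in ft/s
    x = t = 0.0
    while v > 0.0:
        if t < 0.5:                       # phase 1: foot off throttle
            a = 0.0
        elif t < 0.8:                     # phase 2: ramp 0 -> 0.4 g
            a = 0.4 * G * (t - 0.5) / 0.3
        else:                             # phase 3: hard braking, 0.75 g
            a = 0.75 * G
        v = max(v - a * dt, 0.0)
        x += v * dt
        t += dt
    return t, x

t_stop, d_stop = speed_and_distance(45.0)
print(f"From 45 mph: stops in about {t_stop:.1f} s over {d_stop:.0f} ft")
```

Refining the ramp shape further, as noted above, only moves the result by a foot or two.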

Lou (02:13:58):

That doesn't make a big difference

Swaroop (02:13:59):

And you're looking at humans and a foot or two is it's in that variance of just normal responses

Lou (02:14:07):

And how accurately we can measure the tire mark to begin with. Where did braking actually start? So say somebody is analyzing a crash where they determined that if the driver braked at 0.7 Gs, they would've been able to avoid the crash, and now they're going to depo, they're going to trial. The opposing attorney says, well, that's all great, but 52% of drivers won't even achieve that. So you're asking my guy to achieve that, but half of drivers can't, or haven't been shown to do that in the SHRP2 data. What's your recommendation to a reconstructionist, or a human factors person, who's presented with that argument?

Swaroop (02:14:45):

I still look at what the avoidability is, because now the difference is, a majority of the time spent in that response is still going to be PRT, especially at higher speeds. And then braking is the part beyond that, which is beyond what your driver has done. And so we can talk about someone braking slower, but that's how they braked. And for example, if I've got a crash where I am analyzing it at a 0.7 G, but my driver only gave me 0.5 Gs, I'm not going to blame him for that.

Lou (02:15:22):

That's the yeah, he's still within that normal

Swaroop (02:15:25):

Data of what a typical driver has done. But my 0.7 G number is always going to be about what a majority of drivers have done. So more than half the drivers are able to achieve that threshold. And even with SHRP2, where we had scenarios where drivers swerved, for example, you didn't have as high a braking threshold, because you're not going to get 0.7 Gs of braking and be swerving at a 0.3 or 0.4 peak, because you have to remember that if I'm taking a little away from that, I'm going to add it there. And so we try to balance these factors out. And so I'm still looking at the best case of what a typical driver has done, and again, this is what drivers are doing on the roadway and not what my driver did.

Lou (02:16:16):

And you have those numbers. So it sounds like you attack it from a very statistical perspective where you're just like, well, yeah, this is what most drivers do and is 0.5 outside of the realm of normal or whatever you just it I guess statistically, is it one standard deviation away from the average? Yeah, it's about that. So we're kind of in the heart of it, just attack it from a factual statistical basis and then let the judge and jury do whatever they're going to do with it.

Swaroop (02:16:40):

Right.

Lou (02:16:41):

Yeah, that makes sense. That one struck me, man. When I saw that data, I was quite disappointed in us humans. I'll tell you what, I'm like, come on, man. The average in the naturalistic data for motorcyclists is really bad as well. It's like more in the 0.4 range. We did the study, I think it was published in 2009, with Muttart, and we had like 32 riders on a fully instrumented Kawasaki, a bike that can pull one G all day. The average was 0.44 Gs on the braking events. And I'm like, man, that must've just been a weird anomaly. And then the MSF 100 study came out. Most of their data, I think there were 30 or 40 events, were near crashes. Those are tough to analyze because you're like, well, did they just implement the braking that was required to avoid the crash and then call it good? So if we see a 0.4 there, it doesn't necessarily mean that much to me. But then when you get into the crash events, and there's only five of those, so it's very skimpy, they're still at that 0.4 even when they're going to crash. So there's a lot of work to be done there, but man, how many fewer crashes would there be on the roadway if people were hitting 0.8 all over the place?

Swaroop (02:17:56):

A small fraction lower I think

Lou (02:17:58):

Small fraction, A

Swaroop (02:17:58):

Small fraction.

Lou (02:17:59):

It's mainly what leads up to that in your opinion.

(02:18:01):

Yeah, that makes sense. And that's really hard to analyze. I find that that's where the opinions get a little bit more gray. Not to say they're any less valid, I think they are, but it's always a little more uncomfortable talking about, for me as an engineer, getting outside of the complete mechanics of the crash and starting to talk about, well, why were you doing X, Y, or Z heading into the situation? That's where the event really began. You put yourself in a situation where you shouldn't have had to apply 0.8 Gs of braking. And I think there was one motorcycle safety expert that I listened to and used to ride with a lot, and his thing was, a really experienced, proficient rider should very rarely, if ever, have to apply all the braking power that they know how to use. And I think that's true. So yeah. No, actually, I wanted to ask you this, which could be a really long answer or it could be a short one, and I'm open to either one of those: what were your biggest takeaways? Because you published almost 10 papers on SHRP2. What were the biggest takeaways, and if there is something that was shocking to you, I'd be curious to hear that. The braking data was shocking to me, but what were your biggest takeaways?

Swaroop (02:19:26):

So with SHRP2, I was able to publish, let's see, we had left turn across path, that was the first one. Then we looked at cutoff crashes, we looked at pedestrian crashes when drivers are turning, we looked at turning paths. That's a study where the first part we published before, and the second part's probably coming out at SAE this year.

Lou (02:19:49):

Oh, cool.

Swaroop (02:19:51):

We have our abstracts in, and then we've looked at angular crashes. And the number one takeaway is, as we've been talking about before, response time is not a single number. We've seen that depending on what crash type it is, you're going to have different response times, and even more interestingly, what we've seen is even within that single crash type, your external variables are dictating what your response times are. So the number one factor we've seen in turn across path, side road intruder, and cutoff events is time to contact. So the longer it took the hazard to come to my area of impact, the longer my response time has been. It's almost like we have this force field around us where you're okay driving, but once someone attacks the force field, that's when you're like, oh, I need to respond. So, for example, in our turn across path study, if you've got more than three seconds of time to contact, from the time he starts coming sideways and comes towards me, I'm going to take about two seconds of it to hit my brakes. But if that guy cuts me off with less than a second, my response time is less than a second. And it goes to tell us that we tend to think the other guy is, for the most part, going to do the right thing. He's not going to cut me off, he's not going to cut me off. Oh, he is cutting me off, and that's when I'm on my brakes. And so we've seen that the longer I have available, I'm going to use more time from that to try and get to my brakes, to understand

Lou (02:21:40):

What's coming together it sounds like.

Swaroop (02:21:43):

Yeah. And we are just trying to figure out, is this really happening or am I going to get away with this? And the interesting thing was, in that specific crash type, we had cars that were turning, wanting to go into a drive-through or driveway or into the side street. And if traffic was backed up on the side street, the guy starts his turn and has nowhere to go. And so now really the time to contact is like five seconds, because he starts out and he's kind of dwelling, he's trying to figure out where to go. And the response times in those events were more than three seconds, because when I'm five seconds out, I'm like, he's going to go through

(02:22:30):

And then I'm getting closer and closer and he's not doing much, and I used up all of that time, because I have an expectation that that driver's going to go through and I can go through without having to brake. But now I'm getting closer and closer and not much is happening, and then I'm going to lock my brakes up. So dwelling was interesting. And we realized that starting from a stop or not matters: if someone is coming from the side road and never stops at the stop sign, or is doing a turn across path and never stops, just rolls through, response times are faster, because if that driver is just coming in hot, I don't believe that they're going to do the right thing. But if they were stopped and then all of a sudden decided to accelerate, that catches me off guard, and my response time was slower. We saw the opposite trend with cutoff events, because the only times they were stopped was on the shoulder. And as soon as I see them pull into the roadway, I'm like, oh, I know what's going to happen, and I'm going to hit my brakes faster. But as opposed to, if they were just dragging along, response times were slower,

Lou (02:23:45):

That dwelling issue is interesting. They think they're going to go through, whereas it's so easily avoidable if they appreciate, or just take the safe assumption, that they're not, and they just start braking. You could brake at 0.2 Gs and avoid that whole thing. So that's kind of, to me, one of those situations that we were talking about earlier where you put yourself in the need to brake at 0.8, whereas if you understood what was going on. This is a very similar one in the MSF 100 study, where somebody tries to turn left across a motorcycle's path into a business parking lot of some sort, and there's a pedestrian walking across. So they stop for the ped, and the motorcyclist goes straight down to the ground and into the side of them. That's a tricky situation.

Swaroop (02:24:28):

It's interesting, because we were looking at a study by Don Marshall and Tim Brown in Iowa, and they had their NADS simulator, the thing that moves on tracks, so it simulates acceleration and slowing down. And what they did is they were looking at connected cars. So lights flash based on warnings and let you know if someone's going to go through a red light or not. And they had a car going 55 miles an hour on a divided highway, and the side road driver pulls in, at 55, and they had warnings come on saying, this guy's not stopping. And they reported 90% of drivers crashed. And I was talking to the researchers, and they said, hey, you know what, we don't get this result. Why would 90% of them crash? Because had they just braked at 0.2 Gs or just taken the foot off the throttle, that car goes through. They wondered why drivers did it. And I said, no, that's human response. A driver's going to do what he's going to do, which is give the benefit of the doubt to the other driver, that they're going to yield. And when they don't, I'm not going to blame them for how they're responding, because it's just a natural response

Lou (02:25:47):

To what the hazard is. Do you think that we can train that out of humans or is that too deeply ingrained?

Swaroop (02:25:53):

We can. And I think Jeff's study, which looked at novice drivers, looked at what's the most effective way. And in the study done at UMass, what they found to have the longest durability, the longest time it stuck with the participants, was error based training. You show them what the mistakes are and you tell them how to fix the mistakes.

Lou (02:26:24):

It hasn't worked with my children at all. They still leave their dishes on the counter after dinner, but I'm glad it works with drivers.

Swaroop (02:26:31):

Yeah, it's interesting because they showed them what the mistakes are and they do it. And that's the thing with commercial drivers is whenever I'm testifying in a case, I always get asked, but he's a commercial driver

Lou (02:26:44):

Human,

Swaroop (02:26:45):

He should have responded faster. He has hundreds of hours of training. But we forget what the hundreds of hours of training are for. None of it is to avoid a crash. None of it is to brake at 0.8 Gs.

Lou (02:26:58):

So

Swaroop (02:26:58):

Where's this

Lou (02:26:58):

Class?

(02:27:00):

I feel like that, and that's later in my questions, but I feel like that is the number one thing that we can very quickly do to reduce crashes, severity and frequency is just help people understand all the way back to what we were talking about braking, where it's like, here's how you brake hard. Your car can brake hard. Here's how you do it to here's how to identify hazards and what to do and here's how to overpower your monkey brain that's going to tell you everything's okay in this situation, even though it's not, if we could implement that training. But granted, the only people that are open to training are people getting their license, so it's really only going to affect them. It's a long-term solution at the end of the day anyway, maybe we should stop click it or ticket on all the signs and start putting up some other things. Like you're not going to be visible to somebody at night if you're a pedestrian or it's like this training seems to be a huge one,

Swaroop (02:27:53):

Especially at the entry level. We know what good drivers do. We know what experienced drivers do. So bringing that information in, training them when they're young and pointing out where they're going to make the mistakes, that's going to be a big leap in making them safer, because they're learning on the job essentially when they're driving, and it takes a while to learn from your own mistakes what you did wrong. And so if you tell them before, hey, you know what, if you're coming up to this intersection, you know what a 35-year-old does? He takes his foot off the throttle, he's not blowing through it. I get it, you have a green light, right? I get it, the rule says you have the right of way. You're going to do everything you can, go at the speed limit. But you know what the experienced driver does? They cover the brakes, because they're not trusting, because they've seen this way too many times where they've been cut off. And that sort of training is going to go a long way.

Lou (02:28:55):

Is there anything you want to say about SHRP2 before I move on to the next topic?

Swaroop (02:29:01):

We are still analyzing it. We have a backing study coming in. Oh, cool. Yeah, responses when drivers were backing. We've got a pedestrian study, one of the places where we've seen visual manual distractions to be a significant factor. And we've got I think one more rear ender study that's due to be processed, but high speed. High

Lou (02:29:29):

Speed rear enders. Yeah. Yeah. That's really where the rubber meets the road. I hope that manufacturers can get that solved pretty quickly. I wonder how the more advanced manufacturers are handling that, like GM with their Super Cruise. And I have to imagine that on Super Cruise... Would Super Cruise rear end a stopped vehicle on the highway? I wonder

Swaroop (02:29:46):

Today?

Lou (02:29:47):

Yes, it would. Okay.

Swaroop (02:29:48):

Today, yes. I think the speed threshold there is still like a 40 mile an hour difference in what crashes it can avoid, maybe slightly higher. But I saw a download in a crash where they had, I think the Bendix, not the Bendix system, the Wabco system, where it picked up on a stopped vehicle 500 feet away. It picked up that there's a hazard, and then we can see the data from the download, which is showing the gap between the two reducing every fraction of a second. But the thing is, it's not consistent, and that's what manufacturers are trying to address.

Lou (02:30:39):

So you saw the data, which suggests things didn't end well.

Swaroop (02:30:42):

Yeah, because the forward collision warning that the driver got was 1.3 seconds before impact.

Lou (02:30:52):

Oh, interesting.

Swaroop (02:30:53):

And if you're at 65 miles an hour,

Lou (02:30:55):

So it detected it but didn't tell him about it.

Swaroop (02:30:57):

Yeah, because of the false alarms, essentially. Because now it probably would give you similar information for a bridge or a sign, and you'd rather have them use it for a 20 mile an hour speed difference than not use it at all at any speed difference. And so until they can narrow it down and make it more specific, that's one of the issues manufacturers are tackling, because they can pick it up, but not consistently.
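
To put that 1.3 second warning in context, here is a back-of-the-envelope check. The 1.5 second PRT and the 0.75 g braking level are assumed round numbers for illustration, not figures from the download Swaroop describes.

```python
# Back-of-the-envelope check on a forward collision warning issued
# 1.3 s before impact at 65 mph, with a stopped vehicle ahead.
# Assumed values: 1.5 s PRT and 0.75 g braking.
G = 32.2                      # ft/s^2
v = 65 * 5280 / 3600.0        # ~95 ft/s

warning_distance = 1.3 * v                           # distance left at the warning
stopping_distance = 1.5 * v + v ** 2 / (2 * 0.75 * G)  # PRT travel + braking distance

print(f"Distance available when warned: ~{warning_distance:.0f} ft")
print(f"Distance needed to stop:        ~{stopping_distance:.0f} ft")
```

Even with a prompt response, a warning that late leaves only a fraction of the distance a full stop would require, which is the tradeoff with false-alarm suppression that Swaroop describes.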

Lou (02:31:31):

And I wonder, I mean, that kind of feeds into that LiDAR versus camera argument: which one of those is going to do a better job without false alarms? And I guess time will tell. I don't think Super Cruise has LiDAR. I think they map the roadways with LiDAR and then the car knows where it is, so it knows what the roadway looks like, but it's not sensing with it. Not that that would be necessary, because radar can tell, but I guess radar's not going to get you the shape. But that far away, you'd need a hundred thousand dollar LiDAR system, 500 feet out, for it to tell you what it is 500 feet away. Otherwise it's just going to be a couple blobs of light. It's not going to go, oh, that's a car. You're not measuring a car like that. And

Swaroop (02:32:11):

Even with visual systems, anything beyond 500 feet, you're talking like one or two pixels, and with the general vibration of the car it's going to move around, so it might not be the best at larger distances.

Lou (02:32:29):

So maybe, I mean that brings up the idea of vehicle to vehicle communications, which I haven't heard discussed for a long time. I have to imagine that's still behind the scenes because that seems like the best way to eliminate an issue like that.

Swaroop (02:32:41):

It is. But the issue there has been, what's the best way to transmit information? Do we do it over cell phone towers, or do we do it over just V2V direct communication? How is that affected by rainfall? How is that affected by dust? How is that affected by just debris covering your sensors? And it's been installed in some intersections. We know it's been implemented in Japan, some intersections in Japan, and the Hondas and Toyotas have it built in, where if you're coming up to a light around a blind curve, it tells you the light's going to be red when you show up to it, so you're not caught off guard. And so infrastructure to vehicle, vehicle to vehicle, all of that's going to go a long way. But the crashes we investigate are on rural highways

Lou (02:33:29):

Not

Swaroop (02:33:30):

There. It's a challenge. In a perfect world, it would work really well, but

Lou (02:33:36):

Yeah, no, that's really interesting. Infrastructure. Even if the highway, it's all a logistical nightmare I guess. But if the highway could sense there was a car stopped on it in the middle and then just put up safety lights or talk to the other vehicles and be like, there is a car stops on me, look out.

Swaroop (02:33:54):

And they're trying to do that with variable speed limit signs. So anytime there's a traffic backup, we've seen this in Nevada, we've seen this in the state of Washington, where they have temporary speed limit signs, digital signs that come up, because if there's backed up traffic, they want to slow you down like a half a mile before you show up and you're like, oh, I need to brake. And the phenomenon of looming, the easiest way to explain it to someone is: have you ever been driving and had that gut wrenching feeling of, oh, I need to hit my brakes? That's essentially what our response to that is. And by just having everyone slow down five miles an hour leading up to this backup in traffic, that goes a long way toward making sure people are not showing up at full speed when everyone's stopped in traffic.

Lou (02:34:46):

Whenever I see brake lights ahead of me, and I'm driving a 5,000 pound Tundra, your kinetic energy is proportional to the square of your velocity, one half m v squared, where v is the velocity. So I'm just like, if I see that, I'll shave off like 10 miles an hour. I know it's going to make a big difference with respect to my stopping capability. It's like every little bit helps there. There's such a tight marriage between human factors and ADAS stuff right now, between what we were talking about, your early education where you're establishing how the driver interacts with the machine, what warnings are actually going to be received in a productive fashion, to figuring out how humans are going to respond compared to a machine. What is the standard that we should be using? Is a system that was just developed by Cadillac better than a human? Well, to know that, we need to know how the human responds. And I saw you had two presentations last year, one at IPTM and one at SAE, on ADAS stuff. What's interesting to you, I guess is the best way to say it, in the ADAS world right now?
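
A quick worked example of that energy point, using the 5,000 pound truck and round highway speeds as assumed illustrative values:

```python
# Kinetic energy is 1/2 m v^2, so it scales with the square of speed.
# Shaving 10 mph off highway speed removes a sizable share of the
# energy the brakes have to dissipate. Weight and speeds are assumed.
LB_TO_KG = 0.4536
MPH_TO_MS = 0.44704

def kinetic_energy_joules(weight_lb, speed_mph):
    m = weight_lb * LB_TO_KG
    v = speed_mph * MPH_TO_MS
    return 0.5 * m * v ** 2

ke_65 = kinetic_energy_joules(5000, 65)
ke_55 = kinetic_energy_joules(5000, 55)
print(f"65 mph: {ke_65 / 1000:.0f} kJ, 55 mph: {ke_55 / 1000:.0f} kJ "
      f"({(1 - ke_55 / ke_65) * 100:.0f}% less energy to dissipate)")
```

Dropping from 65 to 55 mph cuts the kinetic energy by roughly a quarter, which is why shedding even 10 mph early makes such a difference to stopping capability.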

Swaroop (02:35:57):

So a lot of the ADAS research has looked at L2 vehicles. So basically you've got lane keeping and you've got adaptive cruise control. And what we've seen so far is you're more likely to be distracted in their presence. So you're more likely to take your eyes off the road, you're more likely to engage in a secondary task while you're engaged in L2. That's me. And so also, while ADAS systems can avoid these smaller speed differential crashes, when you are in real trouble, where it's beyond system capabilities, it's going to be a lot harder for a driver to take control. And that's one of the reasons why researchers don't want L3 cars on the roadway, which basically is fully autonomous unless the system doesn't know how to handle something, in which case it's going to hand over control. And we've seen we are not great at taking over control, especially if you are telling them everything's going to be okay. Like we saw with the Uber crash in Arizona, you're going to tell them everything's going to be okay, the car is going to do everything, but then when it does not do it, then it's going to be hard to take over and try and avoid that

Lou (02:37:22):

Crash. So L2 may be okay, sounds like probably reduces crashes, but in that 50 mile an hour closing speed, it's not going to do anything. It's going to want to pass it off to you, but you might be texting on L2, so you're more likely to be texting, not that you will be, but you are more likely to based on the research, it's like we want to skip straight to L four, just kind of no L three if

Swaroop (02:37:44):

You can help it. Yeah, because even with L2, the thing we have to remember is it's advanced driver assistance systems, and assistance is the main thing, which is it's going to assist you. You still got to pay attention, you still got to stay in control of the vehicle. It takes a little bit of a load off, because now you keep your hand on the steering, it's going to steer around, and just in case it misses, you can correct and take over. Closing your eyes and recording videos for the internet to show how great your L2 system is? Not a great idea, because sometimes these systems fail, and when they do, you want to be able to take over.

Lou (02:38:26):

Are we getting to a point now where the humans are, obviously the processing time is so much higher, they don't really have a PRT. Theoretically if you're following another vehicle and it knows the distance between the two and it's not this far out problem like we were talking about before, it can just instantly react. Are we getting to a point where humans are outperforming, sorry, machines are outperforming humans with respect to safety and responses to challenging and risky situations, dangerous situations

Swaroop (02:39:00):

In some situations, and again, it's going back to a difference from manufacturer to manufacturer. I was looking at when forward collision warnings and AEB activate in different vehicles. A Honda or a Toyota might do it like three seconds out; a Tesla, I think, or maybe a Ford, like 1.3 seconds out. So these systems might avoid more crashes, but drivers may be more likely to turn them off, because three seconds away, if it's braking, I'm thinking something's wrong with this car.

(02:39:36):

And the other thing is comfort. Once you hand over control, if your car is braking for every single thing, it's not a comfortable drive, and again, you're more likely to turn it off. So manufacturers are dealing with, what's the sweet spot where we improve safety and we make sure the systems don't get turned off? Same thing with when do you warn drivers. Again, you don't want to warn them too early, and you don't want to warn them too late. And that's a challenge they're facing, and we don't have a consensus on who wants to do what, and there's going to be a difference there.

Lou (02:40:21):

Man, humans, they are. The problem with driving for sure, I've had several people in my life tell me like, well, this car is so sensitive to x, y, or Z, and I was driving one the other day, I can't remember what it was. I think it was an older Honda maybe, and I had never experienced it before, like full on autonomous braking when I wasn't actually in a dangerous situation whatsoever. But I was driving this thing just down the road and it all of a sudden braked very hard and it gave me an appreciation for the people who have complained to me about that and be like, I don't really, because I generally talk about it as a good thing. So my mom's got the Subaru Outback with eyesight and I'm like, oh, that's awesome. I'm so glad you have that. That's going to make things a little bit safer for you, especially in you're getting on in your age. And I'm almost universally met with, oh, that thing's so sensitive, I don't like it. So we are the problem. Well, they've got to fix that. Obviously it shouldn't be actually braking beeping at you is one thing, but actually braking, I could see that be a big turnoff for drivers and then them just either decreasing the sensitivity or when it's an option, turning it off completely.

Swaroop (02:41:29):

I think the main thing we've seen so far with autonomous cars or ADAS systems is they drive very much like a novice driver, very reactive. Your average driver, with five years of driving experience or more, has learned enough to know not to go full speed through an intersection, but your autonomous cars haven't learned that yet, and they don't drive like your typical driver. They're going, hey, green light, I've got right of way. And it's the same argument with older drivers, right? It all comes in where, yeah, they're going to do better to overcome that brake lag effect, they're going to do better to overcome foot movement time and perception effects, but they won't be ready before. So as it stands right now, they're a wash in most situations, where an experienced driver is going to do better in some situations, and an autonomous car is going to do better where it's exceeding the human limitations or beyond our limits of what we can recognize. And so the autonomous cars can do really well there, but for a typical situation, I think we do some things better, those cars do some things better,

Lou (02:42:49):

Hopefully we can get there so that the best of both worlds and marry those two things. How are you seeing that come into play? ADAS systems come into play in your reconstructions. Are you being presented with data to help you understand when the car is taken over versus the driver and kind of analyzing, I mean ultimately your job, if I'm summarizing it appropriately, is to look at the behavior of the driver. So in certain situations you've got to figure out was it the driver, was it the adas that responded? If it wasn't the driver, why didn't the driver respond? How have you seen that manifest in your work?

Swaroop (02:43:29):

We have, and we've had a few cases which involved cars that were equipped with ADAS, but the reconstructionists always tell me it's hard to parse out which one it was. I think the easiest is, some vehicles will report pedal position, which is a good indicator of who's hitting the brakes, throttle percentage, and based on the position of the pedals themselves. In some scenarios we know we have braking, we know the system had the ability to do this. But what we've been able to do is, IIHS has a really good database of each one of these systems that they've tested, and it kind of tells you what the capabilities of these systems are. So if you've got the driver on the brakes one second before impact, and we see from IIHS that the system doesn't brake until half a second, that's a good source of information to know, well, it probably is the driver, because the system's not designed to brake that far out. And IIHS does both.

(02:44:37):

One is just ratings on their website, where you can go in, look up a specific car model and it tells you all the features, but then you can also get the raw data from their testing. And that's very helpful, because they test every system five times across different scenarios, and it tells you how many times it failed, how many times it did well, and what the average was. And you've really got to figure out the specific car, the specific situation, and who it was who responded. And if it was the system that responded, we've got to see, well, in a similar scenario, when have most drivers braked? Or if the car took over before that specific threshold, then yeah, probably the driver tried to brake, but the system's already braking by then, and it's slightly trickier to analyze. But I think from a data point of view, we have enough there to really be able to figure out what's going on.

Lou (02:45:32):

And you mentioned swell rate before, and that's a term that I associate with these lead vehicle situations where you're traveling the same direction as a vehicle ahead of you that's either slow or stopped, and then the human has to appreciate that there is an issue and then respond accordingly. That seems to be a uniquely challenging situation for humans, to detect that threshold where it is going to become an issue. So what does the literature show there? Why is that such a challenging situation, and how can we mitigate that?

Swaroop (02:46:06):

So we looked at looming, and we call it looming because that's how we realize we're moving closer to things: things occupy a larger portion of our field of view as we get closer to them. Now, when you're going 60 miles an hour and you're coming up on a car, that car, when you look at how big it appears in your field of view, you're going from a thousand feet to about 500 feet and that car doesn't grow that much in your field of view. And we don't appreciate that rate of change at that distance, right? At about 500 feet, you might be able to tell if you're closing or separating, but you can't really tell how fast you're closing for the most part. You think you're going 65; that car might be going 55 or 60, and you appreciate that you're closing. And why this crash type is so bad is because the first thing drivers have done, as soon as they realize they're closing on a car, is they look at their mirrors, and mirror glances typically take between one and two seconds.

(02:47:08):

So now you look at your mirror, you look ahead, you look at your mirror and you come back ahead, and now you're 200, 300 feet closer to that stopped car. You kind of knew it was slower, but you didn't know; now you're 200 feet closer to it. And then that's pretty much where we are saying, oh, I need to lock my brakes up. It takes you a couple seconds to get to your brakes and try and avoid that crash. So it's very challenging, because it gives you just about enough information for you to start looking at your mirrors, and that just makes it worse, because now, especially if you've got a car in your blind spot and you're just trying to see, hey, are you going to let me over or not, that's going to take you even longer, because you're spending more time in your mirrors and away from where the hazard is. And it's just so tricky. And we really need supplements for drivers, where the system warns you and says stopped vehicle ahead, or something like that, so that you can avoid these crashes even before you're starting to appreciate that closing. And again, we run into the same issue: am I warning them too early to the point where they're like,

(02:48:21):

Right,

Lou (02:48:22):

It's going off all the time.

Swaroop (02:48:23):

I know you're warning me, but I look up and the car still looks the same size. What do you want me to do? And it's just such a tricky problem to solve right now.

Lou (02:48:34):

We're not equipped with the machinery ourselves. So it sounds like even the car with radar and cameras and LiDAR that might not do it either. That is a really tricky situation. So I've never done the math on that. You probably know the answer to this. Does the swell rate accelerate as the closing distance closes? As the distance between the vehicle closes? It's not just faster and faster, but it's actually accelerating that rate?

Swaroop (02:49:03):

Yeah, it's what we call an exponential curve. So initially it stays pretty flat and then that rate of growth of the car really ramps up. And so Jeff wrote a study in 2005 that looked at this swell rate and looked at what the PRT is in these cases. And what was interesting is everybody has their point of view, right? Researchers say, hey, you know what the studies before said: 0.003 radians per second is where drivers recognize closing, which is where they can say, is the car slower or faster than I am? Then Jeff Muttart came out with his study and he said 0.006, which is, if you're going 65 miles an hour, about 300 feet away from it. And he said, okay, this is where you're like, oh, I need to hit my brakes. And Jeff says 0.006 is emergency response, 0.003 is closing versus separating. And then Markkula, who looked at SHRP2 data for this, and he looked at naturalistic tractor trailer driver data from South Africa, and his study was a farewell to reaction time, and everybody sending us this study is like, oh,

Lou (02:50:21):

Oh geez,

Swaroop (02:50:22):

He's bringing it, it looks like this is targeted towards you. And then we looked at the study and we were like, you know what? This is a great study, because he says drivers hit the brakes about half a second after 0.02. He's saying 0.006 is too early, 0.003 is too early, 0.02 is where drivers start hitting the brakes. Now if you look at where drivers are really hitting the brakes: if I start the clock at 0.003, reaction time has been like three and a half seconds.

Lou (02:50:55):

Not consistent with an emergency, generally

Swaroop (02:50:57):

True. And if I look at 0.006, response time has been 1.8 seconds. And if I look at 0.02, reaction time, although he doesn't call it that, is half a second. Where are my tire marks starting? All at the same point. Because if I start the clock earlier, I use a longer time; if I use 0.006, I use a shorter time; if I use 0.02, I use an even shorter time. But the beauty of the research is we get repeatable, consistent data, which is where the tire marks start, which is very close to the vehicle. A majority of drivers do crash, unfortunately, in these situations when you have just large speed differences.
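
For anyone who wants to check the arithmetic behind those thresholds, here is a small sketch using the standard small-angle approximation for the rate of optical expansion (theta-dot is roughly width times closing speed divided by distance squared). The six-foot vehicle width and 65 mph closing speed are assumed round numbers; the 0.003, 0.006, and 0.02 rad/s values are the thresholds discussed above.

```python
import math

# Small-angle approximation for looming: theta_dot ≈ (w * v_rel) / d^2,
# where w is lead-vehicle width, v_rel the closing speed, d the gap.
W_FT = 6.0                      # assumed lead-vehicle width, ft
V_REL = 65 * 5280 / 3600.0      # assumed closing speed on a stopped car, ft/s

def distance_at_threshold(theta_dot_rad_s, width_ft=W_FT, v_rel=V_REL):
    """Gap at which the optical expansion rate first reaches the threshold."""
    return math.sqrt(width_ft * v_rel / theta_dot_rad_s)

for threshold in (0.003, 0.006, 0.02):
    d = distance_at_threshold(threshold)
    print(f"{threshold:.3f} rad/s is reached at roughly {d:.0f} ft")
```

With these assumed inputs, 0.006 rad/s is crossed at roughly 300 feet, which matches the 65 mph figure mentioned above; the key point is that whichever threshold you start the clock at, you pair it with the PRT measured from that same threshold.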

Lou (02:51:43):

So all you need to really know is the mathematics to determine the rate of expansion, that 0.003 radians per second or 0.006. And then once you've determined what threshold you're going to use, you relate that to the appropriate perception response time. You wouldn't want to mix and match those, correct? You've got to standardize for that and the things that make a difference. I saw you wrote a paper on taillight brightness. I know taillight width makes a difference. What did you find for brightness? Your paper also covered cell phone use, which I'm super curious about, because at first glance it looked like hands-free didn't really make a difference, and then age. So how do all of those things affect the ability to perceive this swell rate?

Swaroop (02:52:36):

So as far as taillight dimness, we go off of visual cues that we are typically used to seeing as we've driven, and our perception, or what we've learned, in grad school we call it a mental model, and in the real world we just call it experience, is that if I'm seeing dimmer lights, it's something further away; if I'm seeing brighter lights, it's closer to me. And what we did is we compared participants and we showed them a dimmer taillight and a brighter taillight at different distances. And what we found was that if I had dim taillights and bright taillights both at the same distance, a majority of participants said the brighter taillights were closer to them than the dimmer ones were. Same thing even when we stepped it up, which is the dimmer taillights were closer and the brighter taillights were one notch ahead.

(02:53:35):

So I think we had it like 420 feet for dim, 500 feet for bright, and it was a 50-50 where they said either one could be closer to them. Now that's 80 feet, so if you're going 45 miles an hour, it's like one second of time that I've just lost because I wasn't able to tell the difference. And so if you've got a construction vehicle that's just packed with dirt and you barely see the taillights or the reflective tape, or the car battery's got issues, you're stalled and the battery's gone, and your taillights are a fraction of what they looked like before, now you're going to create issues. As far as distraction, what we found was, as far as reaction time or perception response time, if you are on a hands-free call, a handheld call, or no distraction task, all of them had very similar perception response times to the hazard when you're closing on a vehicle. Where we did see a difference, or where we saw a big jump in response times, number one was cognitive tasks. But cognitive tasks are more like mental math and problem solving, which we are never going to be able to testify to in court. We don't know what the driver was doing, what he was thinking,

Lou (02:55:00):

Was he calculating his taxes in his head on his way to impact.

Swaroop (02:55:05):

But where we can really bring in information is a visual manual task, like we talked about, something that takes your eyes off the road and has you interacting with it. So changing your radio, texting, dialing a phone, all of those tasks, we saw a significant increase in response times. And we saw the same effect with L2 studies as well, so Level 2 automation studies, a similar trend: if you put that L2 driver in a visual manual task, the takeover time was much slower. And so with hands-free and handheld, the theory is that since you're not taking your eyes off the road, you're not taking away from your spatial information, so it has less of an effect on your response time in itself. And age as well. Age, like we said, you might be slightly slower to respond in certain scenarios, but you make up for it in your experience.

(02:56:01):

And across SHRP2, there's five or six studies we've written. In SHRP2, age and gender response times were the same, never significantly different between your young driver, your average experienced driver, and someone over the age of 70. And Jeff keeps pushing, telling me that, hey, older drivers are older; every year he grows up, and so we've kept moving it one year up every time. But I think the consensus, or what most researchers tend to use, is like 70 plus for older driver performance. But yeah, because you make up for it in the depth of information you've learned over the years,

Lou (02:56:41):

I think that always shocks people outside of the human factors industry, or probably the recon industry, that age doesn't really come into play. It's not one of the algorithmic inputs when you're calculating response time. Age doesn't come into play, which is wild. Quick motorcycle aside, James Sloan wanted me to bring this up, and I was like, well, this is a good idea. There's two things there that relate to looming, which is a driver's ability to detect the closing speed of a motorcycle, or time to contact of a motorcycle, when they're pulling out in front of it. Based on everything, what could we do to improve? Is there anything we could do to improve that, the ability to detect time to impact for somebody thinking about turning in front of a motorcyclist?

Swaroop (02:57:32):

It comes down to being, again, a human limitation. Because when researchers tested drivers, they basically had them sit in a turn pocket and they had cars drive towards them, and then they had motorcycles drive towards them at the same speeds. And across the board, everybody said the motorcycle was going to arrive at the area of impact later; time to contact was judged longer by about half a second. In some cases, it's just a human limitation where we think we have more time with motorcycles. Coming back to the looming issue, if it's wider, it grows faster in my field of view; if it's narrower, it's going to be harder. The other thing is at nighttime: now you've got a single headlight or very closely packed headlights, and it blends in with background traffic. So if you've got a rider who accelerates from the intersection before all the other cars around him, and the cars are lagging by a couple hundred feet coming up to the next light, is there a chance that you're going to think his single floating ball of light is attached to one of the cars further behind in the roadway?

(02:58:44):

Like the tennis ball? Yeah, the tennis ball, right. And again, yeah, that's the likelihood, because now I'm not going to be able to pick out that this headlight is not attached to one of those cars further back in the roadway. And in both scenarios, again, it's a human limitation. Again, it comes down to training and trying to inform people, because the number one thing you might hear from drivers is, I never saw him. That's the number one thing we hear: why don't drivers see motorcycle riders? The true answer probably is they didn't expect him to be at the area of impact. They probably caught him at a glance and didn't expect him to get to the area of impact as soon as he did, or at nighttime, probably just never differentiated his headlight from everything in the background. But do motorcycle riders become invisible to drivers?

(02:59:36):

It's less likely. So it's just that we process information where I look at things and I dismiss it. If it's not an emergency, do I remember it? Do I remember everything I've seen in with my eyes? Probably not. The most emergent event in those crashes is really the bang of the motorcycle hitting your car, and that's going to sort of override everything you've seen before that, and it's got, we call it a recency bias. The thing that happened more recently, the more salient object that happened is probably going to have a more long lasting effect on your memory than something which you thought was not important. When you made that turn.

Lou (03:00:22):

Yeah, no, that's really interesting. As a motorcyclist, I'm always very interested in what can be done. I'm like, well, what if we put lights on the handlebars of every motorcycle as far out as possible to the looming point that's going to help appreciate closing speed, but then it might just look like a car that's really far away. And Wade just posted up a paper that the NHTSA funded that I've got to read more closely, but they analyzed a bunch of different headlight configurations. The results seem to be a little bit counterintuitive to me, but I've got to look at that closer. Alright, we've been going for a long time. I appreciate you. It's like 5:40 right now. We're almost on three hours. So I'm going to finish up with a couple different things.

(03:01:10):

I figured I'd have to cut out a couple things, but one is demonstrative. So you've got all this information, you've got the imagery from the Sony, you've got light readings, you've got illuminance readings, luminance and luminance readings, and you have to relay to a judge or a jury like this is the challenge that the driver was presented with. And you can of course do that by citing the literature and just being like, Hey man, this study, they're not going to see 'em until a hundred feet. Do you have demonstratives that you like to present to the judge or the jury where you're like, that's what the literature said and here's what it looks like. Is there anything you've got in the back pocket for that?

Swaroop (03:01:52):

Yeah, two things we always do. Number one is stop your video or your photographs. At the point of no escape. Point of no escape in very simple terms is where your typical driver, your 85th percentile driver needs to start responding, start going through perception response time to avoid that crash. So if you stop your video there, or if you stop your photograph from that particular frame and you show it to your jury and they see and they say, there's nothing in here. What do you want me to see? That's your answer, which is, if your driver can't see it from this point, the crash is unavoidable. Because now if you let the video run through the area of impact, now they're going to have that bias of, oh, I saw the pedestrian, but they can't seem to figure out, can I stop my car in that distance? That's a little harder for them to figure

Lou (03:02:52):

Out. Yeah, you want to inject your analysis there and just be like, this is the time it's all done. And then do you eventually show them the whole video or do you not even need to?

Swaroop (03:03:02):

We don't need to. The other side probably is going to, yeah, but it's your goal to emphasize and say

(03:03:09):

You've seen the entire video. But where we stop the video is probably where the crash is unavoidable for a majority of drivers. The second thing we do is calibrating nighttime photographs. Daytime, we don't have that issue as much, but for a nighttime photograph, what we use is the contrast gradient board. So it helps to make extensive notes. When I'm out at the scene, I write down every single contrast I can see and what direction it's pointed in, and when I put it on my computer, now I know if my image is brighter than what I saw or not, because I'm saying, oh, number five contrast was supposed to be pointed to the right, I don't really see it, my image needs to be brighter. And if you can't use a contrast board, use some form of a reference. There might be a billboard out there where you see some things and you don't see some things.

(03:04:00):

You can count the retroreflective dash lines in the road. I was able to count four when I was at the scene; my video shows six, so my video's probably brighter than what I saw, and I can account for that. So, extensive notes when you're at the scene for what you saw, and try and match it as much as possible when you have it back at the office. And we say you can use this in court, but no judge or attorney is going to allow you to change the settings on a monitor or a projector. So have printouts. What we do is, I change the EV values of my calibrated photos, print them out at three or four levels, and then again go back to my notes, because printers are going to be different and I want to get the best matching photograph. And so once I've done that process, I narrow it down to the photos I'll use as demonstratives, and then just hand them out so that they can hold them and see them at a specific distance and know, without having to worry about what the 10-year-old projector or monitor is doing.
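
As a rough sketch of that exposure-bracketing step, here is one way it could be scripted, assuming the Pillow library is available. Doubling brightness per EV stop is only an approximation on an already gamma-encoded JPEG, and the file names are placeholders, not the actual workflow.

```python
from PIL import Image, ImageEnhance

# Rough sketch: generate bracketed versions of a calibrated nighttime photo
# for printing, so the printout that best matches the scene notes can be
# chosen. Brightness factor of 2 per EV stop is an approximation; file
# names are placeholders.
def bracket_exposures(path, ev_steps=(-1.0, -0.5, 0.0, 0.5, 1.0)):
    img = Image.open(path)
    for ev in ev_steps:
        factor = 2.0 ** ev                        # ~1 stop per EV unit
        out = ImageEnhance.Brightness(img).enhance(factor)
        out.save(f"calibrated_{ev:+.1f}EV.jpg")

bracket_exposures("nighttime_scene_calibrated.jpg")
```

The selection among the printouts still comes from the field notes, counting what was and was not visible at the scene, not from the script.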

Lou (03:05:05):

Yeah, no, that sounds brutal. I was talking to Suway about that when he came on the podcast, and I think he told me he had one instance where he had hours to get everything ready and prep and calibrate the AV in the courtroom; every other time, he's not allowed to do that. So it's tough to show them unless you print it like you said. So one of the most daunting parts about this career path, and I'm talking specifically about your consulting work and the fact that you have to testify sometimes, is the testimony itself. How have you found that, and what advice do you have for younger engineers that are coming into this and are going to be exposed to those trials and tribulations?

Swaroop (03:05:52):

I thought I was young and way to make me feel old.

Lou (03:05:54):

Yeah, exactly. Not anymore, Swaroop! Not after a

Swaroop (03:05:56):

Decade. Yeah. So the number one thing I've kept reminding myself, which has helped me do really well at depositions, is to remember that I am the expert in the room. And the same thing goes for everyone who's going to testify: you are the expert in the room, more than anybody else who's out there. And so as long as you remember that, and remember that you are there to educate the trier of fact, the jury, the judge, you're there to give them or empower them with information so that they can form an educated opinion. So you're not trying to defend anyone. You're not trying to provide opinions on who caused what. My goal always has been: this is what everybody does, and his response is pretty bang on with what everybody's doing, so I can't really blame him for what he's doing, and the jury can decide and say, oh yeah, he's giving me a comparison. If I just tell them his reaction time was one second, compared to what? So make sure you're educating. Yeah, just remember to educate the jury. The opposing attorney is going to try to intimidate you. They're going to try and make your analysis look bad and say that you're biased. Don't let any of that bother you. No, you are the expert in the room, so just support your opinions and you'll do a great job.

Lou (03:07:28):

Yeah, like you said, that helps a lot. I always tell myself I'm here to help. I'm here to help everybody understand how this happened, because they'll try to paint you every time as somebody who's not here to help but to advocate. And it's like, no, I'm not. I'm not even going to tell you, like you said, who's at fault. That's your job. So anytime somebody asks me who's at fault for this crash, I'm like, are you kidding me? I'm an engineer. That's not my job. My job is to tell you how it happened, and then the judge or the jury decides. You couldn't pay me enough to tell you who's at fault. I'm here to help. I'm here to educate. And teaching helps a lot with that. I know not everybody has that opportunity, but for me it's been very helpful.

(03:08:05):

And I have to imagine that you found the same thing, because when you're teaching in front of a group of 30 colleagues, they're going to bring things up that they don't understand. They're going to say, Swaroop, I don't get that, or, can you expand on that? But when you're presenting to a jury, they unfortunately don't have a voice 99% of the time. I wish we got more questions from the jury; in my experience it's very rare that the judge allows the jury to ask the expert questions. When it happens, I love it. But you kind of already have an idea of what they might or might not be understanding based on teaching hundreds, thousands of students over the years. So if you can teach, I highly recommend doing that for a lot of different reasons. But yeah, that's awesome. Thanks. So okay, safety. I guess I'm going to ask one safety question.

(03:08:52):

I don't want to hold you too long. I know you've got traffic to contend with and things. But one of the things that always interests me, and you're more on the safety side of this than I am: when I'm at parties and stuff and people hear what I do, they're like, oh, cool, you're in safety. I'm like, I wish I was, but I don't help anybody be safer right now. I just analyze crashes that already happened. But as you know, we have this huge community of investigators that know a lot and have analyzed a lot of crashes. Do you see a path toward turning that information into actual safety measures that prevent crashes in the future?

Swaroop (03:09:29):

We do, because I think the advantage that we all have, as opposed to researchers who might be running very specific studies, is that we've been on the ground. We've seen what drivers do, and it very seldom matches what we want them to do. So every time we're looking at a specific crash type, we put out a note and say, hey guys, if you've got a crash which involved head-on vehicles and you've got videos from those crashes, send us that information. Or if you've got EDR downloads from this specific type of crash, share them with us. We're currently doing that with headlight maps: we tell you how we've mapped our headlights, and if you follow the same protocol, send your headlight maps to us. It helps add to the robustness of the database, because now we can make the numbers more consistent and rely on larger sample sizes.

(03:10:30):

And so there definitely is a lot we can bring in. We've been trying to work with different research institutes and transportation institutes over the last few years, because we've been trying to bring in the perspective of what we see on the roadway. I worked with the ASTM committee for retroreflective sheeting, where we're trying to come up with a standard for how to maintain your tape and what quality to keep it at. There are manufacturers on the committee and there are safety professionals on the committee, and we're trying to bridge the safety aspects of it with the manufacturing advantages of it, so that they know this work is not really only litigation; it's also contributing toward saving lives, even if it's through the process of litigation. If people are going to make changes to their systems for overall safety, it's a big win for us, because ultimately we're trying to save lives. So I try to volunteer my time with them. I work with the Transportation Research Board, helping peer review research studies, because we bring a very unique input to what researchers are doing, and we try to help them narrow it down to what's happening in the real world.

Lou (03:11:50):

Yeah, that's key, man. That's part of my goal too: to take all this information and turn it into even one saved life. That would be amazing. And like you said, there are ways to do it. We're working on that right now with Lightpoint. I'm very keen, and I reached out to Jeff about this, to find organizations that are really breaking the mold with respect to training teen drivers, and then contribute some of our profits toward them. I would love to be able to do that. If I knew it was going to be used in an effective way and was going to create better drivers, I would a hundred percent put, not a hundred percent of my profit, but some of my profit toward that, especially as my kids start to enter that realm. I've always thought it was important, but now my kids are a few years away from driving and I want to have the best programs available. I will be spending a lot of time in the passenger seat giving them some pointers. Hopefully they're well received. As any parent knows, you're generally not the best person to teach your own child; it's good to have a little bit of separation there. But anything else you wanted to bring up or talk about before we start shutting things down?

Swaroop (03:13:17):

So I've been doing this for a decade now; in November it's going to be 10 years. I remember the very first time I met Jeff in person was at the SATAI conference 10 years ago, and he said, come in, you're interviewing for the job, but I think you'll like this class. And I literally asked everyone in the room, so is this what you all do for a living? I'm very happy that I'm going to be speaking at SATAI this year. It feels like a full-circle moment: where I started my career, exactly 10 years later, I'm going to be teaching that class. So if you're in Southern California or the southwest United States, it's a three-day class. We're going to be doing some hands-on nighttime testing, going out at night in the parking lot and measuring cars and pedestrian clothing. That's something I'm looking forward to, and if you're around, I'd like to see you all there.

Lou (03:14:15):

They have that huge area too. I think it's Glendale PD; they're super helpful with SATAI, and it's an amazing facility. The teaching facility is amazing. I don't know if it's a runway or what, but they do all sorts of crazy stuff. There is a runway close by, I know, because you couldn't fly drones there, but it's some big, wide-open space. It'll be perfect for that. And how cool is that, man? Full circle. I think the first, I'll say, national-level presentation I ever gave was at SATAI, and it was right after doing research with Jeff in 2007 or '09, somewhere around there. I was begging Jeff, you just do it, I don't want to do it. And he's like, no, man, they've heard enough from me, get your butt out there and do it. And I'm glad I did. But I love SATAI. It's a great organization. They always bring great material and great presenters to the conferences. It's organized really well, and the facility's great, so that's awesome. I'm going to try to make it; I'm not too far from there, it's a day's drive away. So hopefully I'll see you out there. And thanks again for coming out, Swaroop.

Swaroop (03:15:21):

Oh, thanks for having me. This has been a real pleasure.

Lou (03:15:24):

Hey everyone, one more thing before you get back to business, and that is my weekly bite-sized email, To the Point. Would you like to get an email from me every Friday discussing a single tool, paper, method, or update in the community? Past topics have covered Toyota's vehicle control history (including a coverage chart), ADAS (that's advanced driver assistance systems), Tesla vehicle data reports, free video analysis tools, and handheld scanners. If that sounds enjoyable and useful, head to lightpointdata.com/tothepoint to get the very next one.