While automakers have been developing increasing levels of autonomous capability for their vehicles, Toyota has been more focused on safety systems that protect human drivers in extreme situations.
But last month it took the wraps off the new autonomous vehicle platform being developed by its special R&D unit, Toyota Research Institute. Platform 2.1 advances the unit’s quest for active safety intervention, and eventually a self-driving car, along a parallel development track.
Toyota released a video showing the first demonstration of its Guardian and Chauffeur autonomous vehicle platform, shot at a closed road course near its North American headquarters just outside Dallas.
In the driver-safety Guardian mode, the system used cameras inside the car and external sensors to determine that the driver was falling asleep while approaching a corner. It took control and steered the car to safety.
In autonomous Chauffeur mode, the test vehicle dodged hay bales falling off a truck in one sequence, and changed lanes to avoid a truck blocking the road in another, the video shows.
The demonstration served as a public debut for Toyota Research Institute, which was established 18 months ago as a startup within Toyota and has offices in Silicon Valley; Cambridge, Mass.; and Ann Arbor, Mich.
Ryan Eustice, 41, vice president of autonomous driving and head of Toyota Research Institute’s office in Ann Arbor, spoke with Staff Reporter Laurence Iliff in early October.
Q: Why is this significant for Toyota Research Institute at this moment?
A: There are a couple of reasons why this is significant. One is our rapid progress. TRI is a brand-new company for Toyota. We’ve only been around for 18 months or so. We booted up basically a brand-new organization across three different sites and hired a really fantastic team that has come together. We inherited the original self-driving car effort from the Toyota Research Institute of North America — not to be confused with us — and that has been ongoing since 2005 in terms of Toyota’s effort in automated driving. We subsumed that team into TRI and started with that base, but then really accelerated our progress. And I think the thing that we are really excited about, and what we showed in that demonstration, is this inventive idea of our test car setup, where we have a dual cockpit that lets us test both Guardian and Chauffeur ideas on the same platform.
Is this the first significant test of what Toyota Research Institute has been developing in terms of the hardware and the software?
We’ve had our efforts in the Chauffeur program, which is autonomous driving, and that we’ve been testing for a while now on public roads. But with Guardian, this is our first significant test of this concept physically. We’ve been able to do it in simulation thus far. With the dual-cockpit car, what it allows us to do is to safely test these concepts of Guardian. By no means are we suggesting that a production car would have two steering wheels in it — this is a research test that allows us to get real people into the car. They can sit in the passenger seat where we have a dual set of controls. You can drive that car like any other car, but via software now it allows us to change the way the car feels and handles in terms of how Guardian works with you to make you a better driver.
Autonomous driving is a long-term goal, but how about Guardian? Can it be rolled out much more quickly?
We do see Guardian as something that we can deploy more quickly and make a significant impact in being able to save lives sooner. Guardian is actually built on the technology backbone that goes into Chauffeur, which is the long-term goal of having a car that is responsible for the driving task 100 percent of the time, with the human as the passenger. There are many challenges to that, but that is one of the end goals in terms of capability that we are trying to develop within TRI. I think where we have a unique perspective is in terms of how we view Guardian. Guardian is hands on the wheel, eyes on the road, with the human as the primary driver. But it’s how we use the technology backbone that goes into Chauffeur to help make you a better driver, to prevent accidents. With Guardian, we see an ambitious goal of trying to create a Toyota vehicle that is incapable of being responsible for causing a crash.
Is anything like Guardian on the road today?
There are primitive forms of Guardian today. Automatic emergency braking detects that there is an object in your path, and the car automatically begins to apply the brakes to help slow you down and possibly even stop you before you hit something. Guardian is in a similar vein, but it goes far, far beyond anything on the market today. With Guardian, we imagine our vehicle to not only think about braking but also to think about steering and even acceleration. Imagine you’re going through an intersection and someone is going to run a red light and T-bone you. We imagine our Guardian car to be able to understand and predict that, and to be able to accelerate you out of the way. And that requires a huge leap forward over anything that exists today.
The cars we use to test Guardian are highly perceptive. They are equipped with many, many sensors, trying to sense the world around them 360 degrees at all times. They have much more information to work with than many of the systems we find in production today. They also have sensors inside the car to monitor the driver, because one of the things about you as a driver is that your driving performance changes throughout the course of the day. It depends on whether you’re tired, whether you just had a cup of coffee and are fresh, and whether you’re distracted, maybe talking to the passenger seated next to you.
When will vehicles have elements of Guardian?
We’ve publicly announced that we will be demonstrating our Chauffeur and Guardian technologies at the 2020 Olympics in Tokyo and that this will be released in the product shortly thereafter.
Is Guardian comparable to something like Level 2 autonomous driving?
I don’t think Guardian fits into those levels of automated driving. What we are trying to do is something different than that. Those automated levels of driving, the assumption is that the human gives up control of the car in different degrees of freedom. In Level 2, for example, the car is responsible for the driving task in terms of both the longitudinal and lateral control. And so the car is actually doing the driving and the responsibility of the human is to monitor that system and to be able to accept a hand-back if the system disengages, as well as a supervisory role to disengage the system if they feel that it’s in a situation it can’t handle. And I would characterize that as: We’re asking a human to guard the [artificial intelligence].
Guardian is exactly the opposite. Guardian is using the AI to guard the human. So in Guardian, we think of the human as the primary driver and Guardian is continuously running in the background, continuously monitoring the external world around the vehicle as well as the driver and trying to assess the risk of the driver and how we can use this information to supplement the driver or to intervene on their behalf to prevent accidents.
Is the race toward an autonomous car simply a matter of knowing what the hurdles are and seeing who gets there first, or are the assumptions of what autonomy should look like changing along the way?
Autonomy is actually not a solved problem. Most driving is easy, but some driving is extremely hard. And with much of the technology that has been developed over the last decade or so, the easy driving, more or less, we know how to do. It’s the hard driving that’s hard. When we talk about these Level 4 or Level 5 systems, the human is always the passenger, so the car has to deal with the strange and the weird events. Suppose the water main breaks and the road turns into a river. That is hard. The person who is in the car may not be capable of the driving task at all. In fact, that is one of the goals of mobility from Toyota’s perspective.
There’s a lot of focus on who is ahead and who is behind in developing these technologies. Is there enough time for Toyota Research Institute to figure this out by the time these systems become competitive on the market?
We are very much in the game, and that’s what TRI really shows. I think you’ll be hearing a lot more publicly from TRI as we go forward with the rapid progress we are making. What are some of the competitive advantages that Toyota brings to this? Much of the software that goes into achieving artificial intelligence and automation comes from machine learning. This is not software that is written by humans anymore. It’s actually models that are trained from data. So when you look at a company like Toyota, where we have on the order of at least 100 million vehicles deployed worldwide at any given time, that becomes a tremendous asset and resource for us to become data-rich, and to think about how we can use that data to continuously and vastly improve the performance and capability of our systems.
With Toyota, we have the combination of TRI, which is a very nimble, agile arm of Toyota to rapidly develop technology, and the manufacturing strength and scale of Toyota in terms of volume. Toyota has a win-win combination here.
“Toyota says it’s ‘in the game’ on autonomous technology” originally appeared in Automotive News on 10/23/2017