If you haven't bought a new car in a while, you may not have noticed that the future of the dashboard looks like this:
That's it. A single screen replaces all the gauges, buttons and switches on the dashboard. But behind that screen is a growing level of automation that hides a ton of complexity.
Sometimes everything you need is displayed on the screen. At other times you have to page through menus and watch the screen while driving. And at 70 mph you have to figure out whether you or your automated driving system is controlling the car – all while learning the new features, menus or reorganized user interface that might have been updated overnight.
At the beginning of any technological revolution, technology gets ahead of the institutions designed to measure and regulate safety and standards. Vehicle designers and regulators will eventually catch up, but in the meantime we're about to embark on a learning curve – a beta test involving millions of people – to discover the best driver/vehicle interface.
We went through this with planes, and we're reliving that transition with cars. Things will break, but in a few decades we'll come out the other side, look back, and wonder how people ever drove any other way.
Here's how we got here, what it will cost us, and where we'll end up.
Cars, computers and safety
Automobiles are undergoing two major upheavals: 1) the move from internal combustion engines to electric motors, and 2) the introduction of automated driving.
But a third, equally important change is also underway: the (r)evolution of car dashboards from dials and buttons to computer screens. For their first 100 years, cars were essentially a mechanical platform – an internal combustion engine and a transmission with seats – controlled by a mechanical steering wheel, accelerator and brakes. The car's control instruments consisted of dials and gauges: a speedometer, a tachometer, and fuel, water-temperature and battery gauges.
By the early 1970s, driving had become easier: automatic transmissions replaced manual gearshifts, and hydraulically assisted steering and brakes became the norm. Comfort features also evolved: climate control – first heating, then air conditioning – and entertainment – AM radio, FM radio, 8-track, cassette, CD and streaming media. In the last decade, GPS navigation systems began to appear.
Yet even as cars improved, automakers resisted making them safer. In the 1970s, automobile deaths in the United States averaged 50,000 a year. More than 3.7 million people have died in cars in the U.S. since they first appeared – more than all U.S. war deaths combined. (This puts automakers in the rarefied category of companies – along with tobacco companies – that have killed millions of their own customers.) Automakers argued that talking about safety would scare off customers, or that the added cost of safety features would put them at a competitive price disadvantage. In reality, styling came before safety.
Automotive safety systems have gone through three generations – passive systems and two generations of active systems. Today we're entering a fourth generation: autonomous systems.
Passive safety systems are features that protect occupants after a crash has occurred. They began appearing in cars in the 1930s: safety glass in windshields came in response to horrific, disfiguring collisions. Padded dashboards were added in the 1950s, but it took Ralph Nader's book Unsafe at Any Speed for the U.S. federal government to mandate passive safety features starting in the 1960s: seat belts, crumple zones, collapsible steering wheels, four-way flashers and even better windshields. The Department of Transportation was created in 1966, but it wasn't until 1979 that the NHTSA (National Highway Traffic Safety Administration) began crash testing (the Insurance Institute for Highway Safety began testing in 1995). In 1984, New York became the first state to mandate seat-belt use (it's now mandatory in 49 of the 50 states).
These passive safety features began to pay off in the mid-1970s, when the number of auto-related deaths in the United States began to decline.
Active safety systems try to prevent accidents before they happen. These depended on the invention of low-cost automotive computers and sensors. For example, accelerometers on a chip made airbags possible, since they could detect a collision in progress. Airbags began appearing in cars in the late 1980s and 1990s and became mandatory in 1998. In the 1990s, computers that could analyze wheel sensors (position and slip) in real time made anti-lock braking systems (ABS) possible. That feature was finally required in 2013.
Starting in 2005, a second generation of active safety features appeared. These operate in the background, constantly monitoring the vehicle and the space around it for potential hazards. They include: electronic stability control, blind-spot detection, forward-collision warning, lane-departure warning, rear-view video systems, automatic emergency braking, automatic pedestrian emergency braking, rear automatic emergency braking, rear cross-traffic alert, and lane-centering assist.
Today, a fourth wave of safety features is appearing, in the form of autonomous/self-driving functions. These include lane centering/auto steer, adaptive cruise control, traffic-jam assist, self-parking, and full self-driving. The NHTSA has adopted the six-level SAE standard to describe vehicle automation:
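The original post shows the SAE levels in a table here. As a rough illustration – a hypothetical sketch paraphrasing the well-known SAE J3016 levels, not code from NHTSA or SAE – the six levels and the division of responsibility can be summarized like this:

```python
# Sketch of the six SAE J3016 driving-automation levels (paraphrased summaries,
# not the standard's exact wording) and who must monitor the road at each one.
SAE_LEVELS = {
    0: ("No Automation", "driver does all steering, braking and monitoring"),
    1: ("Driver Assistance", "system helps with steering OR speed; driver monitors"),
    2: ("Partial Automation", "system steers AND controls speed; driver must monitor"),
    3: ("Conditional Automation", "system monitors; driver must take over on request"),
    4: ("High Automation", "no driver attention needed within a limited domain"),
    5: ("Full Automation", "no driver needed under any conditions"),
}

def driver_must_monitor(level: int) -> bool:
    """At Levels 0-2 the human is still responsible for watching the road."""
    return level <= 2
```

The key break is between Levels 2 and 3: `driver_must_monitor(2)` is true, while at Level 3 and above the system takes over the monitoring task.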
Getting past Level 2 is a really hard technical problem, one that has been discussed endlessly elsewhere. But what hasn't received much attention is how drivers interact with these systems as the level of automation increases and the driving role passes from driver to vehicle. Today, we don't know whether there are times when these features make cars less safe rather than safer.
For example, Teslas and other cars have Level 2 and Level 3 autopilot functions. With Level 2 automation, drivers are supposed to monitor the automated driving, because the system can hand control of the car back to them with little or no warning. With Level 3 automation, drivers are not supposed to need to monitor the environment, but they must still be ready to take control of the vehicle at any time – this time with notice.
Research suggests that when drivers are not actively controlling the vehicle, they may be reading their phones, eating, watching the scenery, and so on. We don't really know how drivers will behave under Level 2 and 3 automation. Drivers can lose situational awareness when they're surprised by the automation's behavior, asking: What is it doing now? Why did it do that? What will it do next? There are unanswered questions about whether drivers can gain and maintain enough situational awareness to take control before hitting something. (Trust me, at highway speed, having a take-over alert appear while you're admiring the landscape raises your blood pressure and, hopefully, your reaction time.) And if those technical challenges weren't enough, these autonomous driving features are arriving just as car dashboards become computer screens.
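To make the take-over problem concrete, here is a hypothetical back-of-the-envelope sketch (the speed and reaction-time numbers are illustrative, not drawn from any study cited above) of how far a car travels before a surprised driver even begins to respond:

```python
# Illustrative calculation: distance covered during a driver's take-over
# reaction time. Numbers are hypothetical examples, not study data.
def distance_traveled(speed_mph: float, reaction_s: float) -> float:
    """Meters traveled before the driver begins to respond."""
    speed_ms = speed_mph * 0.44704  # convert mph to meters/second
    return speed_ms * reaction_s

# At 70 mph, a 2-second take-over delay covers roughly 63 meters
# before the driver has done anything at all.
```

A driver surprised at 70 mph has covered most of a football field before braking or steering even starts, which is why the milliseconds-scale decision window matters so much.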
We've never had cars that worked like this. Users will not only have to get used to dashboards that are now computer screens; they'll also need to understand the subtle differences among automated and semi-automated features as automakers continuously develop and update them. And they may not get much help mastering the changes. Most users don't read the manual, and in some cars the manual doesn't even keep up with the new features.
But while we've never had cars that work this way, we already have planes that do.
Let's take a look at what we've learned from 100 years of engineering aircraft controls, cockpits and pilot automation – and what it could mean for cars.
Airplane cockpits
Aircraft have gone through several generations of airplane and cockpit automation. But unlike cars, which are encountering automated systems for the first time, planes began getting automation in the 1920s and 1930s.
For the first 35 years of flight, cockpits were very simple, like early car dashboards: a few mechanical instruments for speed, altitude, relative heading and fuel. In the late 1930s, the British Royal Air Force (RAF) standardized on a set of flight instruments. Over the next decade this evolved into the "Basic T" layout – the de facto standard for how aircraft flight instruments are arranged.
Engine instruments were added to measure the health of the aircraft's engines: fuel and oil quantity, pressure, temperature and engine speed.
Then, as planes got bigger and aerodynamic forces grew, it became difficult to move the control surfaces by hand. Pneumatic or hydraulic motors were added to multiply the pilot's physical strength. Mechanical devices such as yaw dampers and Mach trim compensators corrected the aircraft's behavior.
Over time, navigation instruments were added to cockpits. At first these were simple autopilots that kept the plane straight and level and on a compass course. Next came radio receivers to pick up signals from ground navigation stations: pilots could set the desired bearing to the station on the course-deviation display, and the autopilot would follow the displayed course.
In the 1960s, electrical systems began to replace mechanical systems:
- Electronic gyroscopes (INS) and autopilots using VOR (VHF Omnidirectional Range) radio beacons to follow a course
- Autothrottles – to manage engine power to maintain a selected speed
- Flight directors – to show pilots how to fly the aircraft to achieve a preselected speed and flight path
- Weather radar – to see and avoid storms
- Instrument landing systems – to help automate landings by providing horizontal and vertical guidance to the aircraft
In 1960, a modern jet cockpit (the Boeing 707) looked like this:
Although it may look complicated, each of the aircraft's instruments displayed a single piece of data. The switches and knobs were all electromechanical.
Enter the glass cockpit and autonomous flight
Fast-forward to today and the third generation of aircraft automation. Today's planes may look similar from the outside, but inside, four things are radically different:
- The clutter of instruments in the cockpit has been replaced by color screens creating a "glass cockpit"
- Aircraft engines have their own dedicated computer systems – Full Authority Digital Engine Control (FADEC) – to control engines autonomously.
- The engines themselves are an order of magnitude more reliable
- Navigation systems have become true autonomous flight management systems
Today, a modern airplane cockpit (an Airbus 320) looks like this:
Today, air navigation is a concrete example of autonomous driving – in the sky. Two additional systems, the Terrain Awareness and Warning System (TAWS) and the Traffic Alert and Collision Avoidance System (TCAS), let pilots see what's happening beneath and around them, dramatically increasing pilots' situational awareness and flight safety. (Autonomy in the air is technically a much simpler problem: during the cruise phase of flight, there is far less to worry about in the air than there is around a car.)
Airplane navigation has become autonomous "flight management." Instead of a course-deviation dial, navigation information is now presented as a "moving map" on a display showing the position of navigation waypoints by latitude and longitude. The aircraft's position is no longer fixed by ground radio stations; it's determined by Global Positioning System (GPS) satellites or autonomous inertial reference units. The flight route is preprogrammed by the pilot (or downloaded automatically), and the pilot can engage the autopilot to fly the route on its own. Pilots enter navigation data into the flight management system with a keypad. The flight management system also automates vertical and lateral navigation, fuel and balance optimization, throttle settings, critical-speed calculations, and takeoffs and landings.
Aircraft cockpit automation has relieved pilots of repetitive tasks and allowed less-experienced pilots to fly safely. Airline safety has increased dramatically even as the fleet quadrupled, from about 5,000 planes in 1980 to more than 20,000 today. (Most passengers would be surprised to learn how much of their flight was flown by the autopilot rather than the pilot.)
Why cars are like planes
And this is the link between what happened with planes and what's about to happen with cars.
The downside of glass cockpits and cockpit automation is that pilots no longer actively fly the plane – they monitor it. And humans are notoriously poor at monitoring for long periods. Pilots have lost basic manual and cognitive flying skills through lack of practice and lack of feel for the aircraft. In addition, the need to "manage" the automation, particularly when entering or retrieving data through a keypad, has increased pilot workload rather than reduced it. And when systems fail, poorly designed user interfaces reduce the pilot's situational awareness and can create cognitive overload.
Today, pilot errors – not mechanical failures – are responsible for 70 to 80 percent of commercial aircraft accidents. The FAA and NTSB have analyzed accidents and written extensively about how cockpit automation affects pilots. (Accidents such as Asiana 214 occurred when pilots selected the wrong mode on a computer screen.) The FAA has written the definitive document on how users and automated systems should interact.
Meanwhile, the NHTSA found that 94% of car accidents are due to human error – poor driver choices such as inattention, distraction, driving too fast, poor judgment/performance, drunk driving and lack of sleep.
NHTSA has begun to study how people will interact with car screens and automation. They're just beginning to understand:
- What is the right way to design a driver-vehicle interface on a screen to display:
  - Vehicle status gauges and controls (speedometer, fuel/range, time, climate control)
  - Navigation maps and controls
  - Media/entertainment systems
- How do you design for situational awareness?
- What is the best driver-vehicle interface for displaying the status of vehicle automation and autonomous/self-driving features?
- How do you manage the information a driver needs to understand what is happening now and to plan what comes next?
- What is the appropriate cognitive load when designing interfaces for decisions that must be made in milliseconds?
- How distracting are mobile devices? For example, how does your car handle your phone? Is it integrated into the system, or do you have to fumble with it to use it?
- How do you design a user interface for millions of users ranging in age from 16 to 90, with differing vision, reaction times, and abilities to learn new screen layouts and features?
Some of their findings can be found in the document Human Factors Design Guidance for Driver-Vehicle Interfaces. But what's striking is how few NHTSA documents reference the decades of expensive lessons learned by the aviation industry. Glass cockpits and aircraft autonomy have already traveled this road. Even though aviation safety lessons need to be adapted to cars' very different reaction times (planes fly 10 times faster, but pilots often have seconds or minutes to react to problems, while in-car decisions must often be made in milliseconds), the two industries could learn a lot from each other. U.S. commercial aviation has had just one death in nine years; in 2017, 37,000 people died in car crashes in the United States.
There is no safety rating for your car while driving
In the United States, aircraft safety has been proactive. Since 1927, new types of aircraft (and each subassembly) have been required to obtain FAA type certification before they can be sold, and then a certificate of airworthiness.
Unlike aircraft, car safety in the United States has been reactive. New models require no type approval; each automaker self-certifies that its vehicles meet federal safety standards. NHTSA waits for a defect to emerge and can then issue a recall.
If you want to know whether your car model will be safe in a collision, you can look at NHTSA's New Car Assessment Program (NCAP) crash tests or the Insurance Institute for Highway Safety (IIHS) safety ratings. Both summarize how active and passive safety systems perform in frontal, side and rollover collisions. But today there are no equivalent ratings for how safe cars are while you drive them. What distinguishes a good user interface from a bad one, and do their crash rates differ? Do transitions between Levels 1, 2 and 3 of autonomy confuse drivers enough to cause accidents? How do you measure and test these systems? What role should regulators play?
Since NHTSA and the FAA both belong to the Department of Transportation, you have to wonder whether these agencies talk to each other, actively collaborate, and share integrated programs and common best practices – and whether they have mined the NTSB's findings. Judging from the early efforts of Tesla, Audi, Volvo, BMW and others, it's not obvious that anyone has looked at the lessons from aviation.
The logical thing for NHTSA to do during this transition to autonomy would be to 1) start defining best practices for safety and automation user interfaces, and 2) test Level 2 through 4 cars for safety while driving – like crash tests, but measuring situational awareness, cognitive load, etc., across a set of driving scenarios. (There are excellent university programs doing this research.)
However, the DoT's Automated Vehicles 3.0 plan moves the agency even further from a "best practices" role for safety and automation UIs. It assumes automakers will do a good job of certifying these new technologies themselves. And it has no plans for safety testing and rating these new Level 2-4 autonomous features.
(Keep in mind that publishing best practices and testing autonomous safety features is not the same as imposing regulation that slows innovation.)
It may fall to an independent body such as SAE to propose best practices and ratings. (Or there's the slim chance that the auto industry will band together and establish de facto standards.)
The chaotic transition
It took 30 years, from 1900 to 1930, for city streets to move from horses and buggies to automobile traffic. During that period, former buggy drivers had to learn a whole new set of rules to control their cars. And the roads during those 30 years held a mix of traffic – it was chaotic.
In New York, the tipping point came in 1908, when the number of cars exceeded the number of horses. The last horse-drawn trolley left the streets of New York in 1917. (It took another decade or two to remove horses from farms, transit and delivery systems.) We're about to make the same transition.
Cars are on the road to full autonomy, but we're seeing two different approaches to reaching Level 4 and 5 driverless cars. Existing automakers, wedded to existing car designs, are getting there step by step, adding levels of autonomy over time with new models or updates, while startups (Waymo, Zoox, Cruise, etc.) are trying to jump directly to Level 4 or 5.
We'll spend about 20 years with roads full of millions of cars – some driven manually, some with Level 2 and 3 driver assistance, and some autonomous vehicles with Level 4/5 autonomy. It may take at least 20 years before autonomous vehicles become the dominant platform. In the meantime, this mix of traffic will be chaotic. (Some suggest that during the transition, autonomous vehicles should carry signs in the rear window, like student drivers, but this time saying "Caution: AI on board.")
Since there will be no government best practices for UIs or autonomy safety ratings, learning and discovery will happen on the road. That makes it essential for automakers to deliver over-the-air updates for dashboard UIs and automated driving features. Incremental, iterative updates will add new features while fixing bad ones. Engaging customers so they understand they're part of the journey will ultimately make it a successful experience.
My bet is that it will go the way it did when planes moved to glass cockpits and increasingly automated systems: we'll create new ways for drivers to crash their cars, even as overall vehicle safety increases.
But over the next two or three years, as the government tells automakers to "roll your own," it's going to be one hell of a ride.
Lessons Learned

- Car dashboards are moving from dials and buttons to computer screens just as automated driving is being introduced
- Computer screens and autonomy will create new problems for drivers
- There are no standards for measuring the safety of these systems
- There are no standards for how information is presented
- Aircraft cockpit designers are 10 to 20 years ahead of automakers in studying and solving this problem
- Automobile and aircraft regulators need to share what they know
- Automakers can reduce accidents and deaths by drawing user-interface lessons from aircraft cockpit design
- The Department of Transportation has removed barriers to rapid adoption of autonomous vehicles
- Automakers "self-certify" whether their UIs and autonomy features are safe
- There is no equivalent of crash-safety ratings for driving safety with autonomous features
- Over-the-air updates for car software will become essential
- But the downside is that they can dramatically change the UI without warning
- On the way to full autonomy, we'll have three generations of cars on the road
- The transition will be chaotic, so buckle up for a bumpy ride, but the destination – safety for all travelers – will be worth it
Filed under: Innovation, Science and Industrial Policy, Technology