Child-Friendly Autonomous Vehicles

Designing autonomy with all road users in mind

[Image: Recognising kids in Halloween costumes (Google)]

With Waymo putting fully self-driving cars on public US roads last week, mainstream autonomous driving comes closer to reality every day. However, there are still many hurdles in the way, and more needs to be solved than just the ‘trolley problem’. At this point, few autonomous systems are able to perceive their environment accurately enough to deal with outlier situations. They have been trained for most regular road users, but even in these situations the cars occasionally experience the ‘freezing robot problem’. To make sure interactions are safe and efficient, it is important for the industry to tackle very specific scenarios.

One of these specific, and very critical, scenarios is the interaction with children. Right now, pedestrian injury is one of the leading causes of child casualties in the US. Autonomous vehicles have an outstanding opportunity to improve this, but children require different computer vision analysis and interaction than the able-bodied adult. These challenges have not been solved yet.

Perception

Dr. Eleonora Papadimitriou, a researcher specialised in road safety, traffic engineering and transportation systems, says crossing choices are significantly affected by road type, traffic flow and traffic control. She identified three emerging components of pedestrian crossing behaviour: “risk-taking and optimisation”, “conservative and public transport user” and “pedestrian for pleasure”. These are very specific factors to take into account when considering how pedestrians behave on the streets and why they do so.

Looking at child pedestrian injury specifically, Dr. David C. Schwebel, director of the UAB Youth Safety Lab, presents research that looks at similar factors. His findings are that most child pedestrian injuries occur in mid-block areas and in places with a poor overview of the street situation.
A common situation in which accidents happen is the ‘dart-out’: a child enters the street quickly, without thought, to chase a person, toy or pet. This is an environmental situation in which you can expect a child to appear. Another cause of accidents is poor judgement by the child: he or she believes it to be safe and enters the street when in fact the situation is not safe. A lack of perceptive and cognitive training of the child’s brain is the cause of this problem.

Both these pieces of research show that the environmental as well as the human behaviour context needs to be taken into account when looking at the interaction between children and traffic.

[Image: Trolley problem example (Iyad Rahwan/Popular Mechanics)]

Environment

Consider the following scenario: a ball rolls onto the street. Human drivers have eyes that capture this situation and use the occipital lobe of their brain to visually process what happens in front of them. The outcome of this process is the understanding that it is indeed a ball rolling onto the street.

For an autonomous vehicle, the equivalent of our eyes lies in the vision sensor quality. The camera’s visual and/or depth data needs to be of high enough fidelity to discern the environment. Then basic visual processing needs to be done. Are there any people in the line of sight, where is the sidewalk, are there any cars coming, are there any bushes obstructing the view, and what is that sphere rolling across the road? These tasks by themselves required years of research to reach a level of understanding that is only now inching towards how humans perceive the environment.

Now that we know a ball is rolling onto the street, what can we expect to come running after it? At this step, decision-making comes into play. The future path of the ball needs to be analysed, and when it crosses the path of the vehicle, it is time to start worrying and to put some seriously quick computing power behind this.
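The path-analysis step described above can be sketched as a constant-velocity extrapolation plus a lane-intersection check. This is a minimal illustration; the coordinate frame, lane band, horizon and all names are our own assumptions, not from any production stack:

```python
import numpy as np

def ball_enters_lane(ball_pos, ball_vel, lane_y=(-1.5, 1.5), horizon_s=2.0, dt=0.1):
    """Extrapolate a tracked ball's 2D position under a constant-velocity
    assumption and report whether it is predicted to enter the vehicle's
    lane (modelled here as a simple y-band) within the time horizon."""
    for t in np.arange(0.0, horizon_s + dt, dt):
        x, y = ball_pos + ball_vel * t
        if lane_y[0] <= y <= lane_y[1]:
            return True  # predicted to cross the vehicle's path: slow down
    return False

# A ball at the kerb, 3 m from the lane centre, rolling toward the road at 2 m/s:
print(ball_enters_lane(np.array([10.0, 3.0]), np.array([0.0, -2.0])))  # True
```

If the check fires, the next question, as the text notes, is what might come running after the ball.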
We humans have built the neural links in our brain to match this situation with a possible child darting out. Autonomous cars will have to be trained to respond to these unexpected situations, and not just for balls, but also for frisbees, drones and more. The first decision to be made is to slow down in any of these unexpected situations. As soon as the kid pokes its head around the corner, it needs to be recognised by the pedestrian detection algorithms of the car.

[Image: Redball Art Project (Jeremy F/YouTube)]

Even understanding that a person on the street is a child instead of an adult is quite the challenge. Analysing the size of body parts relative to one another is one way of doing this. Another interesting method, described by Dr. James W. Davis in “Visual categorization of children and adult walking styles”, looks at the properties of relative stride length and stride frequency, achieving 93–95% accuracy in classifying children and adults (do note that a tiny dataset was used in this 2001 research). That gives us at least the understanding that it is a child the car is dealing with.

These are quite some steps that the software stack of the car needs to go through. We believe that for an autonomous vehicle to be safer than a human driver, it will need to understand the whole situation in less than ¾ of a second, which is the speed at which the human brain can do so. That means the car needs to understand its street environment, see that it’s a child, and understand the child’s behaviour. For the vehicle perception to be able to do so, we will have to not only work on the computer vision technology, but also think about smart ways to recognise children from a kinesiology perspective.

Child behaviour

A child behaves differently in the streets than an adult. In that same research paper, Dr.
Schwebel describes how we as humans develop an understanding of traffic, and he outlines the human skills that require training to interact with it. For adults, the main causes of accidents are walking at night with poor visibility, walking while intoxicated, walking while distracted by phones, and more. Children are less influenced by these factors; however, their perceptive and cognitive skills are less developed. The main factors to take into account with child pedestrian injury are distraction, temperament, personality and social influence. Some of these are observable through computer vision alone; take NVIDIA’s ‘co-pilot’, which includes distraction detection in its Advanced Driver Assistance Systems. Using just visual sensors, you are able to understand these kinds of characteristics.

In 1981, Dr. Hugo H. van der Molen wrote a paper called “Child pedestrian’s exposure, accidents and behaviour”, about how children are exposed to and behave in traffic. What is interesting is that he splits the behaviour up into function/event diagrams, making it somewhat ‘software programmable’. The types of factors he recommends taking into account are: personal parameters of the child, social parameters, environmental parameters, traffic, and the behaviour of the child. The paper is an extremely detailed description and overview of child behaviour on the streets. However, it was written in 1981, when our roads were significantly different, let alone that there was any interaction with semi- or fully autonomous cars. We believe that there should be a modern-day version of this research that takes autonomous vehicles into account as well. There have been several examples of research methods analysing pedestrian reaction to autonomous vehicles, although we have yet to see one performed specifically with children.
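Van der Molen’s function/event decomposition hints at how such factors could be represented in software today. A minimal sketch, where the field names and example keys are our own illustrative rendering rather than the paper’s terminology:

```python
from dataclasses import dataclass, field

@dataclass
class ChildPedestrianContext:
    """The five factor groups van der Molen recommends tracking, as a
    record a perception stack could populate frame by frame. All field
    names and example keys are hypothetical illustrations."""
    personal: dict = field(default_factory=dict)       # e.g. estimated age, attentiveness
    social: dict = field(default_factory=dict)         # e.g. accompanying adult, peer group
    environmental: dict = field(default_factory=dict)  # e.g. mid-block location, obstructions
    traffic: dict = field(default_factory=dict)        # e.g. flow, speed limit, traffic control
    behaviour: dict = field(default_factory=dict)      # e.g. dart-out precursor, gaze at vehicle

ctx = ChildPedestrianContext(environmental={"mid_block": True, "view_obstructed": True})
```

Such a record makes the behavioural factors explicit inputs to the decision-making stage, rather than leaving them implicit in a single learned model.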
[Image: Ford and Virginia Tech experimenting with vehicle communication (Wired)]

Reaction

So, let’s assume that an autonomous car is able to do all this and reaches the point of slowing down for the ball and recognising the child. Now what? After understanding the environment and the human, the car needs to decide on its future path. In a situation where there is immediate danger, it would have to perform an emergency stop. If there is no immediate danger and the car does not fully understand what the child might do, it would want to wait. Imagine the child is also not sure what to do and politely waits on the pavement. It could turn into a long stand-off if the car was set too conservatively. Anca Dragan, part of the steering committee at Berkeley’s AI lab, describes the freezing robot problem as “anything the car could do is too risky, because there is some worst-case human action that would lead to a collision.” In that situation, the car would wait until the child has disappeared.

A first reaction the car should have would be a change of mode to a more courteous behaviour, displaying a more cautious interaction with the child than it would with an adult. This could mean several things: you could reduce the driving speed, you could expand the distance the car keeps from the person before continuing to drive, and more of the behaviour changes that a human driver makes instinctively. In short, you run a friendlier driving algorithm for children than for adults.

One way to solve the problem is to add another layer to the decision-making tree of the vehicle. If we acknowledge that a child behaves differently to an adult, a car should respond differently as well. As shown in the visual below, the green layer adds a decision point of whether it is a child the car is dealing with, and if so, two actions: driving more carefully and using a child-specific database of body language to make future decisions on how to deal with the pedestrian.
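That added decision point could look something like the following sketch, which combines a Davis-style stride cue with a more cautious driving policy. Every threshold, field and model name here is an illustrative assumption, not a published parameter:

```python
def looks_like_child(stride_length_m, height_m, stride_freq_hz):
    """Toy child/adult cue in the spirit of Davis (2001): children take
    shorter strides relative to body height, at a higher cadence.
    The thresholds are illustrative, not taken from the paper."""
    return stride_length_m / height_m < 0.7 and stride_freq_hz > 1.2

def plan_interaction(pedestrian, base_policy):
    """Extra layer in the decision tree: if the pedestrian is classified
    as a child, switch to a more careful policy and a child-specific
    body-language model (hypothetical name) for future decisions."""
    if looks_like_child(**pedestrian):
        policy = dict(base_policy)
        policy["max_speed_kph"] = min(policy["max_speed_kph"], 20)  # drive more carefully
        policy["min_gap_m"] = 2 * policy["min_gap_m"]               # keep a larger distance
        policy["body_language_model"] = "child_gestures"            # child-specific database
        return policy
    return base_policy  # adult: behaviour unchanged

base = {"max_speed_kph": 50, "min_gap_m": 2.0, "body_language_model": "adult_gestures"}
child = {"stride_length_m": 0.8, "height_m": 1.2, "stride_freq_hz": 1.5}
print(plan_interaction(child, base)["max_speed_kph"])  # 20
```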
This way it will create a safer base environment and make a better-informed decision on how to interact with the child.

[Image: Added layers of child interaction]

Communication

The communication aspect is an entirely new challenge. Adults have learnt road regulations and have had time to train their perceptive and cognitive systems to understand traffic; a child does not have this experience or training yet. Currently, a child can communicate with the driver of the car. A gentle wave ahead or a honk is easily interpreted by the child, similar to how a parent would reward or punish behaviour. These communication points are added in blue to the visual above.

Once the driver is taken out of this equation, a child should still be able to understand what interaction it has with the car. That means you need to go back to the most basic forms of communication. Visual and auditory cues are the most applicable for the child to understand what the car might do. Perhaps the car should have a face, to show specific emotions or gestures. Or imagine adding speakers to the exterior of the vehicle and giving it voice control capabilities, like a driving Amazon Alexa. Although Google and Uber have filed patents for different methods of communicating with pedestrians, these are solutions that require further experimentation, of which very little has been published so far.

[Image: Smiling car (Semcon)]

Final thoughts

To conclude, to ensure good interaction between children and autonomous vehicles, better perception is needed in order to understand the environment and the child behaviour around the car. The computer vision needs to be trained on high-risk environments, such as mid-block crossings with a poor overview for the pedestrian, and on child behaviour such as distraction.
The reaction of the vehicle needs to be thought through on different levels: not only should the decision it makes be a safe and efficient one, it also needs to communicate well to the child what is going on inside the car’s ‘brain’. Only then can a child understand what the car is doing.

Eventually, we can only find out what works best through testing and feedback with actual children. You can’t expect adult design and engineering teams to fully understand what goes on inside a child’s brain when they see an autonomous car. By focusing on this specific situation now, we can make sure that the rollout of autonomous vehicles on public roads creates a better street environment for children than the one we currently have.

This piece was written by Leslie Nooteboom, COO and Co-Founder of Humanising Autonomy. We develop natural interactions between people and autonomous vehicles. Get in touch with us at firstname.lastname@example.org