Tech giant Google has launched a recruitment push for its self-driving car project. The company listed 36 new jobs for Google X: positions in manufacturing and engineering, oriented towards robotics and motion control systems and sensors. The listings included roles for mechanical, automotive NVH, vehicle safety, and reliability engineers. Non-engineering positions for marketing, policy analysis, real estate, and workplace services related to the project were also publicised.
Introductions to many of the job listings state, “The self-driving car project aims to improve people’s lives by transforming mobility and making it easier and safer for everyone to get around, regardless of their ability to drive. So far, we’ve self-driven over one million miles and are currently out on the streets of Mountain View, California and Austin, Texas.”
Google has been at the forefront of autonomous vehicle development, but industry observers are still waiting to see whether the company will go it alone in developing, manufacturing, and distributing its own self-driving cars or will collaborate with an existing major automaker to bring the technologies to market. Last September, Google hired former Hyundai exec, long-time Ford employee, and industry veteran John Krafcik to head its autonomous vehicle projects; Krafcik brings extensive automotive production knowledge to the role.
Rumours had been circulating that Google would team up with Ford, but an announcement expected at CES never came, and Ford focused instead on its partnership with Amazon. Ford's work on autonomous vehicles hit the news last month when it helped open the University of Michigan's MCity autonomous car test site. Ford is also on track to begin testing self-driving vehicles in California next year.
The possibility of self-driving cars is becoming increasingly real. UC San Diego researchers have developed a pedestrian detection algorithm that is quicker and more accurate than existing systems. It can identify people at a rate of 2-4 frames per second, roughly as well as humans can, while making half as many mistakes as existing systems.
The key to this new technology is that it quickly and progressively cuts out areas that don't contain people, applying deep learning only at the final stages, when complex image recognition is necessary to confirm what it is looking at. The new algorithm saves much of the computing power usually needed for pedestrian recognition by limiting its focus to a handful of areas instead of large chunks of the screen.
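The cascade idea described above can be sketched in a few lines of Python. This is an illustrative toy, not the UCSD system: the region format, score names, and thresholds are all invented here to show how cheap early filters keep the expensive deep-learning stage from running on every region.

```python
# Toy sketch of a detection cascade: cheap tests prune most candidate
# regions early, and the costly classifier only sees the survivors.
# All scores and thresholds below are made up for illustration.

def cheap_filter(region):
    # Stand-in for a fast, simple test (e.g. edge density) that
    # rejects regions which clearly contain no person.
    return region["edge_score"] > 0.5

def expensive_classifier(region):
    # Stand-in for the slow deep-learning stage, run only at the end.
    return region["deep_score"] > 0.9

def detect_pedestrians(regions):
    survivors = [r for r in regions if cheap_filter(r)]  # early pruning
    return [r["name"] for r in survivors if expensive_classifier(r)]

regions = [
    {"name": "sky",      "edge_score": 0.1, "deep_score": 0.0},
    {"name": "road",     "edge_score": 0.2, "deep_score": 0.1},
    {"name": "lamppost", "edge_score": 0.7, "deep_score": 0.3},
    {"name": "person",   "edge_score": 0.8, "deep_score": 0.95},
]
print(detect_pedestrians(regions))  # prints ['person']
```

In this sketch only two of the four regions ever reach the expensive stage, which is the source of the computational savings the researchers describe.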
UCSD’s current system can only recognize one object type at a time. However, the team plans to extend it to detect multiple object types. And the technology is not limited to vehicles – it could be used in robots, security cameras, and other devices that need to identify humans instantly.
As well as developments in technology, lawmakers and regulators are becoming increasingly receptive to the possibility of self-driving cars. US vehicle safety regulators recently told Google that an artificial intelligence system could be considered a driver under federal law. This paves the way for autonomous vehicles that can operate without qualified drivers inside them.
In a letter, the National Highway Traffic Safety Administration (NHTSA) said: “NHTSA will interpret ‘driver’ in the context of Google’s described motor vehicle design as referring to the [self-driving system] and not to any of the vehicle occupants.” This ruling gives Google licence to progress with its plans to develop a car with “no need for a human driver,” a proposed design that was submitted in November last year.
California has proposed draft rules that require steering wheels and a licensed driver in all self-driving cars, rules which Google feels are restrictive and would slow deployment of the technology. Google believes its autonomous vehicles have a number of advantages over traditional cars and have the potential to be safer, as human error is considered to be responsible for a large proportion of road accidents.
[Photo by Justin Sullivan/Getty Images]