In a post on its Google+ page today, the Google Self-Driving Car Project announced that its latest cars have begun testing on public roads. The oddly shaped little cars are the third generation of self-driving car technology that Google has been testing since 2012.
Drivers around Mountain View might see the little cars rolling around suburban streets and the Google campus, joining the existing fleet of modified Lexus RX450h and Toyota Prius models. Each car will have a "safety driver," although Google's ultimate vision is to do away with standard controls such as the steering wheel and accelerator.
The cars are part of an ongoing project to develop self-driving cars that could be safer than human drivers. Automotive equipment supplier Delphi Labs, also in Mountain View, has been testing its own self-driving car technology on public roads, making the area a hotbed for this type of technology. Automakers such as Nissan and Ford have predicted that autonomous cars will become available to the public by 2020.
Google notes that its new self-driving cars, which use electric propulsion, have a top speed of 25 mph. The Google+ post says the team wants "to hear what our neighbors think," apparently soliciting comments from drivers who encounter the cars.
3 comments:
The only problem with driverless cars is whether the AI can handle ethical issues.
I totally forgot about that (since I support driverless cars).
The issues:
1) Will the car break the law to save lives?
- i.e., will it drive faster than the posted speed limit to avoid hitting someone?
- Will it cross the double yellow line and hit an oncoming car, killing that driver, to avoid hitting five people on the side of the road?
Once that is solved, then driverless cars can make it (IMO).
Actually, there are people working on that. I've seen lectures on YouTube and elsewhere on the topic. I'll see if I can find them.
The classic case raised in those lectures is whether a self-driving car ought to try to save the life of the person in the car if doing so would kill a larger number of people outside it.
Part of the problem with writing those rules comes from the fact that people are not consistent in their value judgments. :)
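For what it's worth, here's a tiny, made-up sketch (purely illustrative, not anything Google or Delphi has described) of why writing those rules is hard: the decision code itself is only a few lines, while the ethics live entirely in the invented cost weights, and that's exactly where people disagree. The Maneuver fields, risk numbers, and weights below are all hypothetical.

from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    occupant_risk: float   # estimated chance of harming the person in the car
    bystander_risk: float  # estimated chance of harming people outside the car
    breaks_law: bool       # e.g., speeding or crossing a double yellow line

# Hypothetical weights: these numbers *are* the ethical policy,
# and there is no consensus on what they should be.
W_OCCUPANT = 1.0
W_BYSTANDER = 1.0
W_LAW = 0.1

def cost(m: Maneuver) -> float:
    return (W_OCCUPANT * m.occupant_risk
            + W_BYSTANDER * m.bystander_risk
            + W_LAW * (1.0 if m.breaks_law else 0.0))

def choose(options: list[Maneuver]) -> Maneuver:
    # Pick whichever maneuver has the lowest weighted cost.
    return min(options, key=cost)

options = [
    Maneuver("brake in lane", occupant_risk=0.2, bystander_risk=0.6, breaks_law=False),
    Maneuver("swerve across the double yellow", occupant_risk=0.7, bystander_risk=0.1, breaks_law=True),
]
print(choose(options).name)  # "brake in lane" with these made-up numbers

With these made-up numbers the car brakes in its lane; raising W_BYSTANDER above about 1.2 flips the choice, which is the whole point: someone has to decide what those numbers should be.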
I'm sure Isaac Asimov's three laws of robotics will come into play, along with any additional rules that are needed.