
Friday, April 8, 2016

Are we ready for self-driving cars?




Technology around the world is evolving fast, and with that change science is producing many wonders for human convenience. Even cars are now being made to drive themselves, with no human driver needed. And with that, the manufacturers claim, accidents due to human error will also diminish.

But the inevitable issue that follows AI-related technology also arises with this new invention: the ethical dilemma. It is the long-running debate over which principle we should actually use to program AI ethics. You might not realize it, but ethics is not a simple matter. In many cases it is genuinely hard to point out which course of action would be right or better than all the others. This confusion arises because there are many competing principles for defining ethics. The utilitarian approach, among others, is very widely accepted; it states that the best course of action is the one that ensures the greater good. But there are other approaches too. The deontological approach states that the most ethical action is the one that best protects the rights of those affected by the situation. There are still other approaches, and often only a combination of them can decide what the truly ethical action is. But as I have already said, these approaches differ in their foundations, so people's attitudes also differ on the concept of ethics itself.

So the manufacturers of this modern technology are always facing a dilemma over one decision or another. Currently the debate is over how self-driving cars can best be programmed so that their action in each situation is ensured to be ethical. But the surveys these manufacturers conducted to learn the public's view on the issue point towards a new and emerging technological paradox.




Now let’s understand some crucial points in the programming of these cars. If the car happens to come upon a situation where either only the occupants of the car or only the people on the road can be saved, which action should it take? The utilitarian approach suggests that the number of occupants in real danger should be compared with the number of people on the road in real danger, and the car should try to ensure the safety of the maximum number of lives as best it can. But some ethical philosophers point out that, unlike the people on the road, the occupants actually agreed to the ride. By occupying a self-driving car, the occupants entered a tacit contract that they are ready to bear any risks associated with running such cars on the road. A toy sketch of the utilitarian rule follows below.
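
To make the utilitarian rule concrete, here is a minimal, purely illustrative sketch in Python. The scenario fields and function names are my own assumptions for the example, not anything a real manufacturer has published: the rule simply compares the number of lives at risk inside the car with the number at risk on the road and prefers the action that protects more people.

```python
from dataclasses import dataclass


@dataclass
class Situation:
    """A deliberately simplified crash scenario (hypothetical fields)."""
    occupants_at_risk: int    # lives in real danger inside the car
    pedestrians_at_risk: int  # lives in real danger on the road


def utilitarian_choice(s: Situation) -> str:
    """Toy utilitarian rule: favour the action that saves more lives.

    Returns "protect_pedestrians" or "protect_occupants". Ties are
    broken in favour of the occupants here purely as an arbitrary
    illustration, not as a recommendation.
    """
    if s.pedestrians_at_risk > s.occupants_at_risk:
        return "protect_pedestrians"
    return "protect_occupants"


# Example: two occupants versus five pedestrians. The rule says to
# protect the pedestrians, which is the outcome survey respondents
# endorsed for other people's cars.
print(utilitarian_choice(Situation(occupants_at_risk=2, pedestrians_at_risk=5)))
```

Of course, a real system would have to weigh far more than two headcounts under deep uncertainty; the sketch only shows that encoding the rule is trivial, and that choosing the principle is the hard part.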

But however the logic is twisted, no one would actually agree to ride in a car that is programmed to sacrifice its occupants in a time of crisis. So, while the people in the survey agreed that the cars should save the people on the road, none of them agreed to actually ride such cars themselves. And therein lies the paradox.


The only road out of this paradox, I think, is to make people understand how small the chances of such a crisis really are. Passengers already bear such risks today while travelling by air or even by road. But the fact remains that in no way should a car try to save its occupants at ‘all’ costs. Some of those costs might turn out to be quite expensive for the people walking by the roadside, who didn’t even benefit from this technological progress in the first place.
