Even within its own ranks, Google's work on artificial intelligence technologies is causing concern. Recently, those concerns crystallized around the US Army's mysterious Project Maven, in which Google takes part.
This program aims to use artificial intelligence developed by Google to interpret video imagery, which could be used to improve the targeting of drone strikes. Although Google has assured that its contribution to Project Maven is of a "non-offensive nature", the American giant ultimately decided not to renew its contract after 2019.
It is in this context that Sundar Pichai, Google's CEO, has published a series of principles to guide the group's work on artificial intelligence. He asserts that these are not theoretical concepts but "concrete standards that will actively govern our research and product development, and will have an impact on our business decisions."
Today we're sharing our AI principles and practices. How AI is developed and used will have a significant impact on society for many years to come. We feel a deep responsibility to get this right. https://t.co/TCatoYHN2m
– Sundar Pichai (@sundarpichai) June 7, 2018
While Google has pledged not to develop AI for use in weapons, that does not mean there will be no collaboration with the military and governments: cybersecurity, training, recruitment, veterans' health care, and search and rescue are cited as example areas.
With its principles for AI, Google is also tending to its image. The document speaks of social benefit, safeguards for the protection of privacy, avoiding the reinforcement of unfair bias, and accountability… even if it remains unclear how all of this will be enforced, including at the parent company Alphabet.
Along with Amazon, Facebook, Microsoft, IBM and, more recently, Apple, Google – as well as its artificial intelligence subsidiary DeepMind – was already a founding member of the Partnership on AI, which examines, among other things, ethical principles in artificial intelligence.