The EU has outlined its guidelines for ethical AI development just days after Google was forced to scrap its controversial ethics board.
The European Commission today unveiled its recommended steps for achieving “trustworthy” AI as it looks to set up a framework to ensure the new technology is used responsibly.
“The ethical dimension of AI is not a luxury feature or an add-on,” said Andrus Ansip, EU vice president for the digital single market. “It is only with trust that our society can fully benefit from technologies.”
The Commission set out a seven-point plan for ensuring AI is developed ethically, including requirements for human oversight, safety and privacy.
It comes ahead of a large-scale pilot phase, to be launched this summer, which will call on companies and public bodies for feedback.
The move is a further embarrassment for Google, which scrapped its AI ethics board just a week after forming it, following a controversy over its choice of board members.
The tech giant abandoned its newly formed committee after thousands of Google employees protested against the appointment of Kay Coles James, president of the think tank The Heritage Foundation.
Criticism of James’s comments on trans and LGBT people and on immigration also led to a dispute among members of the ethics board, with one academic declining his appointment as a result of the controversy.
Google has said it will continue to monitor the responsible development of AI and will find alternative ways of getting outside opinions on the topic.
“Leading with an ethics-first approach requires a delicate balancing act between realising ethical goals and facilitating technological progress,” said Roch Glowacki, associate at law firm Reed Smith.
He added that the future AI regulatory landscape is likely to require so-called ethics by design.