
New Australian institute formed to study ethics of artificial intelligence

Source: Xinhua | 2018-12-13 18:49:44 | Editor: xuxin

SYDNEY, Dec. 13 (Xinhua) -- Three eminent Australian organisations announced on Thursday that they will collaborate to create the Gradient Institute, an independent non-profit body, to research and implement ethical behaviour in artificial intelligence (AI).

A joint venture of the Commonwealth Scientific and Industrial Research Organisation (CSIRO), the University of Sydney (UoS) and the Insurance Australia Group, the institute will be led by chief executive Bill Simpson-Young, who told Xinhua that AI is already omnipresent in daily life.

"When people talk about AI, usually they're talking about machine learning," Simpson-Young explained.

Machine learning is an application of AI whereby systems learn and improve from experience without being explicitly programmed by a human operator.

It is used every time a new article or product is recommended to a consumer; many companies use it to filter employment candidates, and most banks use it when deciding whom to give a home loan to.
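To make that idea concrete, the sketch below shows, in Python with scikit-learn, the kind of automated decision described above: a model trained on past loan outcomes that then scores a new applicant. The features, figures and approval threshold are invented for illustration and are not drawn from any real lender or from the Gradient Institute's work.

```python
# A minimal, hypothetical sketch of a machine-learning loan decision.
# All data, feature names and the 0.5 threshold are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented training data: [annual_income_thousands, existing_debt_thousands]
X_train = np.array([
    [85, 10], [42, 35], [120, 5], [30, 40],
    [95, 20], [55, 15], [25, 30], [70, 8],
])
# Invented outcomes from past loans: 1 = repaid, 0 = defaulted
y_train = np.array([1, 0, 1, 0, 1, 1, 0, 1])

# The model "learns from experience" (past outcomes) rather than following
# hand-written approval rules.
model = LogisticRegression().fit(X_train, y_train)

# A new applicant is scored automatically; nothing in the code constrains
# which features the model may rely on or how the threshold is chosen --
# the gap an ethical framework for machines would need to fill.
applicant = np.array([[60, 25]])
approval_probability = model.predict_proba(applicant)[0, 1]
print(f"Estimated repayment probability: {approval_probability:.2f}")
print("Approve loan" if approval_probability >= 0.5 else "Decline loan")
```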

With these and other forms of AI likely to permeate further throughout society, experts believe it is imperative that ethical boundaries or guidelines are created to ensure that automated systems operate in humanity's best interest.

"Humans make decisions everyday, and we're making decisions based on an ethical framework," Simpson-Young said.

"We've now got a situation where machines are making decisions every day, about what newsitem you read, about who should be matched up with to date, and so on -- but there is no ethical framework at the moment for the machines."

However, with humanity itself yet to reach ethical consensus, scientists face the difficult problem of not only formulating ethical guidelines, but also making them precise enough to be programmed into code.

"What we have at the moment is questions around what it mean to be ethically human, and we don't really know how to answer that question -- so it's incredibly hard to ask that of an AI," UoS senior lecturer Michael Harre said.

According to Harre, the first step is creating large data sets of what people expect of morality in order to formulate an approach.

The Gradient Institute will incorporate input from a broad range of academic disciplines, including the humanities and law, as well as from engineers and data analysts.

"AI is steering our conversations to try to answer questions that we've had as humans for a very long time," Harre said.

"We're asking what do we want out of our interactions with other people, and that becomes what do we want out of our interactions with AI, and that gives us a very interesting and new perspective on it."
