# Explainable AI (English)

Category: Artificial Intelligence | Size: 2.29 MB | Pages: 28 | Listed: 2021-06-03 | Language: English

File: 可解释的人工智能(英).pdf

Type: Industry research

Uploaded by: FF

Publication date: 2021-05-31

Abstract:

AI has entered the business mainstream, opening up opportunities to boost productivity, drive innovation and fundamentally transform operating models. As AI grows in sophistication, complexity and autonomy, those opportunities become transformational for business and society alike. More than 70% of the executives taking part in a 2017 PwC study believe that AI will impact every facet of business. Overall, PwC estimates that AI will drive global gross domestic product (GDP) gains of $15.7 trillion by 2030.

As business adoption of AI becomes mainstream, stakeholders are increasingly asking: what does AI mean for me, how can we harness its potential, and what are the risks? Cutting across these considerations is the question of trust, and how to earn it from a diverse group of stakeholders – customers, employees, regulators and wider society. There have been a number of AI winters over the last 30 years, caused predominantly by the technology's inability to deliver against the hype. However, with the technology now living up to its promise, the question may be whether we face another AI winter because technologists focus on building ever more powerful tools without thinking about how to earn the trust of wider society.

This leads to an interesting question – does AI need to be explainable (or at least understandable) before it can become truly mainstream, and if it does, what does explainability mean?

In this whitepaper we look at explainability for the fastest-growing branch of real-world AI: Machine Learning. What becomes clear is that the criticality of the use case drives the desire, and therefore the need, for explainability. For example, most users of recommender systems will trust the outcome without feeling any need to lift the lid of the black box. This is because the underlying approach to producing recommendations is easy to understand – 'you might like this if you watched that' – and the impact of a wrong recommendation is low (a few pounds spent on a bad film, or 30 minutes wasted watching a programme on catch-up). However, as complexity and impact increase, that implicit trust quickly diminishes. How many people would trust a diagnosis from an AI algorithm, rather than a doctor, without some form of clarity over how the algorithm reached its conclusion? Although the AI diagnosis may be more accurate, a lack of explainability may lead to a lack of trust. Over time, acceptance may come as widespread adoption builds a pool of evidence that the technology outperforms humans, but until then, algorithmic explainability is more than likely required.
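To make concrete what "lifting the lid" can look like, here is a minimal, illustrative sketch (not taken from the whitepaper) of one simple form of explainability: for an inherently interpretable model such as a logistic regression, the score behind a single prediction decomposes into per-feature contributions that can be read off directly. The feature names and data below are entirely hypothetical.

```python
# Illustrative sketch only: reading per-feature contributions off a linear
# model. Feature names and data are hypothetical, not from the whitepaper.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical diagnostic features (synthetic data).
feature_names = ["age", "blood_pressure", "cholesterol"]
X = rng.normal(size=(500, 3))
# Synthetic label: risk driven mostly by blood pressure and cholesterol.
y = (0.2 * X[:, 0] + 1.5 * X[:, 1] + 1.0 * X[:, 2]
     + rng.normal(scale=0.5, size=500)) > 0

model = LogisticRegression().fit(X, y)

# Explain one prediction: in logit space the score is the intercept plus
# coef_j * x_j for each feature, so every contribution is explicit.
patient = X[:1]
probability = model.predict_proba(patient)[0, 1]
contributions = model.coef_[0] * patient[0]

print(f"predicted risk: {probability:.2f}")
print(f"baseline (intercept): {model.intercept_[0]:+.2f}")
for name, c in zip(feature_names, contributions):
    print(f"  {name:>15}: {c:+.2f}")
```

A deep network offers no such direct decomposition, which is why post-hoc explanation techniques such as LIME and SHAP exist; the whitepaper's point is that the cost of a wrong answer determines how much of this transparency stakeholders will demand.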
