AI has entered the business mainstream, opening up opportunities to boost productivity, drive innovation and fundamentally transform operating models. As AI grows in sophistication, complexity and autonomy, these opportunities extend beyond business to society at large. More than 70% of the executives taking part in a 2017 PwC study believe that AI will impact every facet of business. Overall, PwC estimates that AI will drive global gross domestic product (GDP) gains of $15.7 trillion by 2030.
As business adoption of AI becomes mainstream, stakeholders are increasingly asking what AI means for them, how its potential can be harnessed and what the risks are. Cutting across these considerations is the question of trust, and how to earn it from a diverse group of stakeholders – customers, employees, regulators and wider society. There have been a number of AI winters over the last 30 years, predominantly caused by the technology's inability to deliver against the hype. Now that the technology is living up to its promise, however, the question may be whether we face another AI winter because technologists focus on building ever more powerful tools without considering how to earn the trust of wider society.
This leads to an interesting question – does AI need to be explainable (or at least understandable) before it can become truly mainstream, and if it does, what does explainability mean?
In this Whitepaper we look at explainability for the fastest-growing branch of real-world AI: machine learning. What becomes clear is that the criticality of the use case drives the desire, and therefore the need, for explainability. For example, most users of recommender systems will trust the outcome without feeling the need to lift the lid of the black box. This is because the underlying approach to producing recommendations is easy to understand – 'you might like this if you watched that' – and the impact of a wrong recommendation is low (a few pounds spent on a bad film, or 30 minutes wasted watching a programme on catch-up). As the complexity and impact increase, however, that implicit trust quickly diminishes. How many people would trust a diagnosis from an AI algorithm, rather than a doctor, without some clarity over how the algorithm reached its conclusion? Although the AI diagnosis may be more accurate, a lack of explainability may lead to a lack of trust. Over time, acceptance may come through general adoption of such technology, building a pool of evidence that it outperforms humans; until then, algorithmic explainability is more than likely required.