Artificial Intelligence (AI) and advanced analytics represent one of the most significant business opportunities of the coming years, and many companies are investing heavily in these technologies. But new risks associated with these technologies are becoming relevant, and governments and regulators around the world are beginning to consider the extent to which these risks need to be formally addressed. PwC is involved in these discussions, and we can provide an in-depth overview of AI-specific risks, the regulators' perspective on how these risks can be addressed, and what companies need to do to be prepared.

Defining what AI is has proven to be an elusive task. We use a broad description which defines Artificial Intelligence as the theory and development of computer systems that perform tasks normally requiring human intelligence. By this definition, AI has long been with us. But recent progress, manifesting itself for instance in applications achieving human-level natural language processing, speech recognition, sound generation, and computer vision, has made the problem much more pressing. We are now seeing machine learning gaining a foothold in areas like knowledge processing, automated decision making, and machine reasoning. Computer systems will have a much larger influence on decisions made in and by corporations, as they allow for much faster processing of complex information, with automation also leading to far fewer errors.

The machine learning algorithms which are the foundation of artificial intelligence approaches replace explicit programming by humans with the data-driven parametrization of algorithms generated by machines. This leads to new challenges users have to take into account when applying, and thus relying on, these algorithms. First of all, AI algorithms have no sense of morality; we cannot expect them to apply sound judgement or a deeper, more encompassing understanding of the meaning and implications of their decisions. This is further compounded by their inability to provide reasoning about their decisions. We therefore need to develop approaches to make AI algorithms understandable, and thereby controllable, for humans. Lastly, if the data used to train an algorithm is biased or does not fully cover all aspects impacting the decision, the algorithm will exhibit the same biases and shortcomings when making decisions.
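As a minimal, hypothetical sketch of how such data bias propagates (illustrative only, not part of PwC's framework; all data and names are made up), consider a model that is never shown a sensitive attribute yet still reproduces a historically skewed approval pattern:

```python
# Sketch: bias in training data resurfaces in model decisions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic applicants: a sensitive attribute (group) correlates with
# a legitimate-looking feature (income) due to historical inequality.
group = rng.integers(0, 2, size=n)             # group 0 or 1
income = rng.normal(50 + 10 * group, 8, n)     # group 1 earns more on average
approved = (income + rng.normal(0, 5, n)) > 55 # past decisions driven by income

# The model never sees `group`, yet its approval rates differ by group
# because income is correlated with group membership.
model = LogisticRegression().fit(income.reshape(-1, 1), approved)
pred = model.predict(income.reshape(-1, 1))

# Demographic parity difference: gap in approval rates between groups.
rates = [pred[group == g].mean() for g in (0, 1)]
print(f"approval rate group 0: {rates[0]:.2f}, group 1: {rates[1]:.2f}")
print(f"demographic parity difference: {abs(rates[0] - rates[1]):.2f}")
```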

Regulators are beginning to develop frameworks and publish legal regulations governing the development and application of AI algorithms. The General Data Protection Regulation (GDPR) implemented by the European Union will have a sweeping impact on the creation and usage of machine learning algorithms. The law creates a "right to explanation", whereby users can ask for an explicit explanation of how an algorithmic decision was made about them. Designing algorithms and evaluation frameworks which avoid discrimination and enable explanation will become a necessity. Another example is the Financial Stability Board (FSB), which has issued a first report on artificial intelligence warning that technology companies currently beyond the regulators' reach could disrupt markets and mission-critical applications in financial institutions. PwC is working with regulators on developing frameworks and implementing sound rules.

PwC is uniquely positioned, as we interact closely with regulators, clients, and academia. We have issued a whitepaper on Responsible AI and have developed an AI and Cognitive Automation Risk and Control Framework with over 80 criteria to manage executive, technical, operational, and functional risk. We will explain the framework and the controls required to implement it. We will also present the Explainable AI framework and discuss how stakeholders' needs can be balanced against inevitable tradeoffs. Finally, we will demonstrate the AI Trust Builder, which shows how PwC approaches testing for bias in machine learning models and how we develop approaches that deliver explainability and transparency by integrating techniques like LIME or QII into the model generation process.
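As a rough illustration of what integrating a local explanation technique can look like, the sketch below uses the open-source `lime` package on a hypothetical classifier trained on a public dataset; it demonstrates the general technique only and is not the AI Trust Builder or PwC's implementation:

```python
# Sketch: attaching LIME explanations to a trained model (pip install lime).
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

data = load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# LIME fits a simple, interpretable surrogate model around a single
# prediction, weighting perturbed samples by proximity to the instance.
explainer = LimeTabularExplainer(
    X_train,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    mode="classification",
)
explanation = explainer.explain_instance(
    X_test[0], model.predict_proba, num_features=4)

# Per-feature contributions to this one prediction.
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")
```

LIME builds a local surrogate model per prediction, while QII measures feature influence through systematic input interventions; both yield per-feature attributions of the kind printed above.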

Slides

Time: 10:45 - 11:15
Track: Track 2
Speaker: Christian Westermann