
Peter Burris
What are the differences between AI risk and other types of business risk? https://www.crowdcha...

David Floyer
AI risk is similar to other types of IT risk – risk to brand, risk of revenue loss, risk to customers, partners & employees. It differs in its potential severity, and in new additional risks with which the public and government are unfamiliar.

Lenny Liebmann
Less well-understood. More of an "unknown unknown" -- despite your worthy attempts to taxonomize it today.

David Floyer
For example, AI going wrong in robotics on a factory floor could in the extreme lead to an employee's death. Or it could lead to a failure in the product that causes the death of a customer. Either will lead to headline news, demands for explanation, and significant legal action.

Tony Flath 🎙 Podcast
A1) #CrowdChat There are some similar risks like privacy & security, but from a much broader perspective. There is also the issue of bias & missing information. So much more to consider. #FogComputing #EdgeComputing #distributed #AI #ActionItem #Risk

Peter Burris
@dfloyer Except that most IT risks can be traced back to some engineered antecedent. How does the probabilistic aspect of AI change IT risk?

jameskobielus
Let's peel the onion of "AI risk." Part of that risk is one's exposure to the adverse consequences of AI-infused applications doing bad things: invading privacy, doing unintended things without user authorization, etc.

Peter Burris
@LennyLiebmann Just the beginning of the conversation, of course! What might you add?

jameskobielus
Another aspect of AI risk comes in the development, modeling, and training of AI-infused apps. Are they being modeled from the right data? Are they being built from the correct predictive "features"/variables in that data? Are the models decaying predictively?
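[Editor's note: the model-decay risk raised above can be made concrete with a minimal sketch, not taken from the chat. It assumes you recorded a baseline validation accuracy at training time and can label recent predictions as right or wrong; the function name and tolerance are illustrative.]

```python
# Minimal sketch of one way to watch for predictive decay: compare the
# model's live hit rate against the accuracy measured at training time,
# and flag it for retraining when the gap exceeds a tolerance.

def decayed(baseline_accuracy, recent_outcomes, tolerance=0.05):
    """recent_outcomes: list of booleans, True where the model's
    prediction matched the observed label."""
    if not recent_outcomes:
        return False  # no evidence yet
    recent_accuracy = sum(recent_outcomes) / len(recent_outcomes)
    return baseline_accuracy - recent_accuracy > tolerance

# Validation accuracy was 0.92; lately only 8 of 10 predictions
# have been right, so the model is flagged.
print(decayed(0.92, [True] * 8 + [False] * 2))  # → True
```

In practice the same comparison would run on a rolling window of scored predictions, but the core check is just this baseline-versus-recent gap.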

Lenny Liebmann
There's the risk that investments in AI under-perform. I see a lot of companies pouring a lot of their finite attention into AI projects that deliver incremental gains at best -- rather than pursuing a less easily quantified vision that could be far more transformative.

Tony Flath 🎙 Podcast
@dfloyer Agreed, and Russia working on a robot that shoots and looks like RoboCop is a concern.

David Floyer
Peter - I would contend that all software is probabilistic in working as intended. At the beginning, AI will have lower probabilities of success, but processes need to be in place to improve them.

jameskobielus
Yet another aspect of AI risk is the possibility that the data scientists and other developers haven't documented their workflow precisely, preventing reproducibility, reducing accountability, and exposing the organization to compliance/e-discovery risks.
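[Editor's note: a minimal sketch, not from the chat, of the kind of workflow documentation being described here: recording a training run's provenance so it can be reproduced and audited later. The record fields and function name are illustrative assumptions.]

```python
# Tie a trained model to its exact inputs: a hash of the training data,
# the code version, and the hyperparameters, stamped with a timestamp.
import hashlib
import json
import time

def record_run(training_data: bytes, hyperparams: dict, code_version: str) -> dict:
    """Return an audit record for one training run."""
    return {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "data_sha256": hashlib.sha256(training_data).hexdigest(),
        "code_version": code_version,
        "hyperparams": hyperparams,
    }

run = record_run(b"...training set bytes...",
                 {"lr": 0.01, "epochs": 20},
                 "git:abc1234")
print(json.dumps(run, indent=2))
```

Because the data hash is deterministic, a later re-run on the same bytes can prove (or disprove) that it used the same inputs -- which is exactly what e-discovery and compliance reviews ask for.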

Lenny Liebmann
I mean if you're doing something at a certain scale, incremental gains are worth it, but...

Ira Michael Blonder
@dfloyer Agreed. Given the public visibility of AI, any effort carries with it the likelihood of "low probability/high impact" events. Operational risk folks will have to plan for this.

David Floyer
AI governance is in its infancy, as AI is still in its first innings. One key requirement for AI is the adoption of the normal IT change cycle. Pre-testing with specific data and multiple scenarios is a first step of compliance. This testing will need to be tougher.
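[Editor's note: a minimal sketch, not from the chat, of the pre-testing step described above: run a candidate model over several named scenarios and release it only if every scenario clears its required accuracy. The toy model, scenario names, and thresholds are all illustrative.]

```python
# Gate a model release on scenario tests, each with its own
# required accuracy -- a crude stand-in for the "normal IT change
# cycle" applied to an AI model.

def passes_all_scenarios(model, scenarios):
    """scenarios: {name: (inputs, expected_outputs, required_accuracy)}"""
    for name, (inputs, expected, required) in scenarios.items():
        hits = sum(model(x) == y for x, y in zip(inputs, expected))
        if hits / len(inputs) < required:
            print(f"FAIL: {name}")
            return False
    return True

# Toy "classifier" and scenarios, purely illustrative.
model = lambda x: x >= 0  # classify non-negative inputs as True
scenarios = {
    "typical inputs": ([1, 2, -3], [True, True, False], 1.0),
    "edge cases":     ([0, -0.5],  [True, False],       1.0),
}
print(passes_all_scenarios(model, scenarios))  # → True
```

The point is the gate, not the toy model: no scenario, no release -- and for AI the scenario suite has to cover the unusual inputs that conventional software testing might skip.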

Peter Burris
@LennyLiebmann Great comment! Underperforming returns sets organizations on the classic path to system abandonment. AI isn't immune from practical considerations, is it?

jameskobielus
Still another type of AI risk is the risk that you don't have the right people, skills, tools, and platforms to do AI effectively, rapidly, and well. This is complex, expensive work that can cost a lot and take forever to build/train.

Ira Michael Blonder
@LennyLiebmann this risk also acts as a deterrent for larger organizations who need to see near term benefit in order to justify allocating resources to an AI project

Lenny Liebmann
@mikethebbop I think there's lots of people selling AI promises -- so I see this as less of an issue. :)

Ira Michael Blonder
@jameskobielus @mikethebbop Agreed. Kai-fu Lee makes a big point about the importance of Deep Learning as the leading AI method in his "AI Superpowers" tome. Where do the heavy predictive analytics shops go if they need to migrate off of expert systems?

Ira Michael Blonder
@jameskobielus Unfortunately this is a likely scenario, given the typically lax documentation requirements in most enterprise dev shops.