
Managing AI Risk
AI's impacts are hotly debated -- and usually poorly understood. What risks does AI present? And how should leaders address those risks? Let's CHAT!
Peter Burris
What are the differences between AI risk and other types of business risk? https://www.crowdcha...

David Floyer
AI risk is similar to other types of IT risk – risk to brand, risk of revenue loss, risk to customers, partners & employees. It differs in its potential severity, and in new additional risks with which the public and government are unfamiliar.
Lenny Liebmann
Less well-understood. More of an "unknown unknown" -- despite your worthy attempts to taxonomize it today.
David Floyer
For example, AI going wrong in robotics on a factory floor could in the extreme lead to an employee's death. Or it could lead to a product failure that causes the death of a customer. Either would lead to headline news, demands for explanation, and significant legal action.
Tony Flath 🎙 Podcast
A1) #CrowdChat There are some similar risks, like privacy & security, but from a much broader perspective. There is also the issue of bias & missing information. So much more. #FogComputing #EdgeComputing #distributed #AI #ActionItem #Risk
Peter Burris
@dfloyer Except that most IT risks can be traced back to some engineered antecedent. How does the probabilistic aspect of AI change IT risk?
jameskobielus
Let's peel the onion of "AI risk." Part of that risk is one's exposure to the adverse consequences of AI-infused applications doing bad things: invading privacy, doing unintended things without user authorization, etc.
Peter Burris
@LennyLiebmann Just the beginning of the conversation, of course! What might you add?
jameskobielus
Another aspect of AI risk comes in the development, modeling, and training of AI-infused apps. Are they being modeled from the right data? Are they built from the correct predictive "features"/variables in that data? Are the models decaying predictively?
Lenny Liebmann
There's the risk that investments in AI under-perform. I see a lot of companies pouring their finite attention into AI projects that deliver incremental gains at best -- rather than into harder-to-quantify visions that could be much more transformative.
Tony Flath 🎙 Podcast
@dfloyer Agreed -- and Russia working on a robot that shoots and looks like RoboCop is a concern
David Floyer
Peter - I would contend that all software is probabilistic in working as intended. At the beginning AI will have lower probabilities of success, but processes need to be in place to improve them.
jameskobielus
Yet another aspect of AI risk is the possibility that the data scientists and other developers haven't documented their workflow precisely, preventing reproducibility, reducing accountability, and exposing them to compliance/e-discovery risks.
Lenny Liebmann
I mean if you're doing something at a certain scale, incremental gains are worth it, but...
Ira Michael Blonder
@dfloyer Agreed. Given the public visibility of AI, any effort carries with it the likelihood of "low probability/high impact". Operational risk folks will have to plan for this feature
David Floyer
AI governance is in its infancy, as AI is still in its early innings. One key requirement for AI is the adoption of the normal IT change cycle. Pre-testing with specific data and multiple scenarios is a first step toward compliance. This testing will need to be tougher than today's.
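The pre-testing David describes can be sketched as a pre-deployment scenario suite: a fixed set of input/expected-behavior pairs that a model must clear before promotion. Everything here -- the `approve_loan` rule, the scenario names, the pass/fail convention -- is a hypothetical illustration of the shape such a gate might take, not any real compliance tool.

```python
def approve_loan(income, debt):
    """Stand-in for the model under test (hypothetical decision rule)."""
    return income > 2 * debt

# Each scenario: (name, inputs, behavior the compliance gate requires).
SCENARIOS = [
    ("high income, low debt", dict(income=100_000, debt=10_000), True),
    ("low income, high debt", dict(income=20_000,  debt=30_000), False),
    ("exact boundary",        dict(income=60_000,  debt=30_000), False),
]

def run_scenarios(model):
    """Return the names of failing scenarios; an empty list means cleared."""
    return [name for name, kwargs, expected in SCENARIOS
            if model(**kwargs) != expected]

print(run_scenarios(approve_loan))  # → []
```

The useful property is that the scenario list is versioned alongside the model, so "tougher testing" becomes a matter of growing the list rather than changing the process.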
Peter Burris
@LennyLiebmann Great comment! Underperforming returns sets organizations on the classic path to system abandonment. AI isn't immune from practical considerations, is it?
jameskobielus
Still another type of AI risk is the risk that you don't have the right people, skills, tools, and platforms to do AI effectively, rapidly, and well. This is complex, expensive stuff that can cost a lot, take forever to build/train, etc.
Ira Michael Blonder
@LennyLiebmann This risk also acts as a deterrent for larger organizations, who need to see near-term benefit in order to justify allocating resources to an AI project
Lenny Liebmann
@mikethebbop I think there's lots of people selling AI promises -- so I see this as less of an issue. :)
Ira Michael Blonder
@jameskobielus @mikethebbop Agreed. Kai-fu Lee makes a big point about the importance of Deep Learning as the leading AI method in his "AI Superpowers" tome. Where do the heavy predictive analytics shops go if they need to migrate off of expert systems?
Ira Michael Blonder
@jameskobielus Unfortunately this is a likely scenario, given the typically lax documentation requirements in most enterprise dev shops
Peter Burris
How would you explain to a CxO the role that people should play in AI governance? https://www.crowdcha...

Lenny Liebmann
Is HR exclusively responsible for people? No. LOB managers, IT and facilities security, etc. all touch the employee. AI is an "employee."
David Floyer
The key factor to be explained is the higher level of risk. Examples of how AI risk mitigation is achieved in other organizations would be useful, but they are in short supply. Many leading-edge organizations are going to have to learn by cautious experience.
Peter Burris
@LennyLiebmann Interesting. Wait for the question on vendor relationships, then I'll ask if AI is a "contractor."
Tony Flath 🎙 Podcast
Q4) #CrowdChat We have to educate people that #AI will increasingly be developed around the needs of people, defining and establishing a new frontier for #EmergingTechnologies. Old-school command and control doesn't work for true #automation, for either management or technology
jameskobielus
To the CxO, I'd point out that AI generally doesn't build itself (though automation is coming to that in a big way). People (i.e., data scientists, data engineers, business analysts, etc.) build, train, and manage this stuff. They bear responsibility (good and bad).
jameskobielus
People's role in AI governance is to keep a complete, tidy, comprehensible audit trail of every step in every process that deployed AI into production apps. Ideally, the people who develop this stuff should use automated tools that log the decisions/actions they take.
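The audit-trail idea -- automated tools logging the decisions/actions developers take -- might look, in miniature, like a decorator wrapped around each pipeline step. The step name, the `select_features` function, and the in-memory `AUDIT_LOG` are all illustrative assumptions; a real system would write to an append-only store.

```python
import functools
import json
from datetime import datetime, timezone

AUDIT_LOG = []  # stand-in for an append-only audit store (hypothetical)

def audited(step_name):
    """Record every invocation of a pipeline step for later review."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            result = fn(*args, **kwargs)
            AUDIT_LOG.append({
                "step": step_name,
                "at": datetime.now(timezone.utc).isoformat(),
                "inputs": repr((args, kwargs)),
                "output": repr(result),
            })
            return result
        return wrapper
    return decorator

@audited("feature_selection")
def select_features(columns):
    # e.g. a documented decision to drop a sensitive field
    return [c for c in columns if c != "ssn"]

select_features(["age", "ssn", "income"])
print(json.dumps(AUDIT_LOG, indent=2))
```

The point is that the log is produced as a side effect of normal work, rather than depending on developers remembering to write things down.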
Peter Burris
How will vendor relationships have to evolve to ensure enterprises can sustain chosen AI risk profiles? https://www.crowdcha...

Lenny Liebmann
We should have John Doernberg or someone like that weigh in on indemnification. Can't wait to see how underwriters craft insurance policies for this stuff.
David Floyer
AI is in its infancy, and as in earlier IT stages, enterprises at the leading edge will produce their own solutions. Over time, specialized equipment and software vendors will achieve volume in specific areas, and will reduce risk by better testing and increased volume.
Tony Flath 🎙 Podcast
A) #CrowdChat Way more focus on vendors having embedded #governance built in, and less and less push on customers to manage data, as with #public #cloud. I see market demand pushing vendors
jameskobielus
If you're sourcing AI from a third party, you need to know the provenance of the data, models, code, etc. that went into it. Did it violate IP protections? Was every step in the data prep, modeling, and training well documented? Is the supplier vouching for the AI's accuracy?
David Floyer
The majority of enterprises and individuals will buy their solutions in the volume phase of AI.
Peter Burris
@LennyLiebmann I was going to mention this under your "AI = employee" comment. Most software today is covered by copyright law, which makes most software output a protected speech act. Now, there are case limits to that (as I understand it), but AI is going to do much more.
jameskobielus
In terms of sourcing AI, the buyer needs to negotiate with the supplier the terms of ongoing model retraining. Is the supplier going to do that? Is it the buyer's responsibility? Models decay, and the AI you acquire today may be useless tomorrow if you don't retrain it.
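The model decay being negotiated here can be made concrete with a small monitor that tracks prediction accuracy over a sliding window and raises a flag once it drops below an agreed threshold -- the kind of contractual retraining trigger a buyer and supplier might write down. The class name, window size, and threshold are illustrative assumptions.

```python
from collections import deque

class DecayMonitor:
    """Track recent prediction accuracy and flag when the model has
    decayed below an agreed threshold (hypothetical trigger logic)."""

    def __init__(self, window=100, threshold=0.90):
        self.results = deque(maxlen=window)
        self.threshold = threshold

    def record(self, prediction, actual):
        self.results.append(prediction == actual)

    @property
    def accuracy(self):
        return sum(self.results) / len(self.results) if self.results else 1.0

    def needs_retraining(self):
        # Only fire once the window is full, to avoid noisy early alarms.
        return (len(self.results) == self.results.maxlen
                and self.accuracy < self.threshold)

monitor = DecayMonitor(window=10, threshold=0.9)
for _ in range(8):
    monitor.record(1, 1)   # correct predictions
for _ in range(2):
    monitor.record(1, 0)   # the model starts missing
print(monitor.needs_retraining())  # → True
```

Whoever owns retraining contractually, both sides can at least agree on a measurable trigger like this.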
jameskobielus
Is the AI supplier indemnifying the buyer against some or all of the risks that may come from deploying that model into particular downstream apps?
Ira Michael Blonder
Great question. I would add to points already made the following. Vendor endpoints will have to be thoroughly tested for vulnerabilities to ensure enterprise security standards are implemented correctly
Lenny Liebmann
I meant the analogy more in the context of cross-functional responsibility. HR takes the lead -- but has both structured and ad hoc process flows with other stakeholders.
jameskobielus
Machine learning models are a derived asset (derived/abstracted from data). A key issue is whether the ML model you've built or sourced is abstracted enough during the modeling process so that it doesn't violate IP protections on the data. I'm not a lawyer, but I wonder
Peter Burris
@LennyLiebmann Can you recommend a John Doernberg contribution to read or watch?
Peter Burris
Welcome to our Action Item, our first on AI Risk. Looking forward to the conversation.
jameskobielus
Nice to be here. Chatting from the city by the bay, Which bay, you ask? San Francisco. Which city? The one that shares its name with that bay.
Ira Michael Blonder
@jameskobielus: Can you provide further detail on your notion of how an autonomous vehicle could be vulnerable to adversarial activity? Do you see this vulnerability even if the vehicle is, effectively, network locked down? Nutonomy, I believe, has made this claim
jameskobielus
@mikethebbop Convolutional neural nets are the core of object recognition and smart cameras. CNNs are easily fooled by imperceptible (to humans) "perturbations" in images. They might be tricked to misinterpret a stop sign as a yield sign, for example.
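The "imperceptible perturbation" James mentions can be shown at toy scale. For a linear classifier, an FGSM-style attack simply nudges each input component a small step against the sign of the corresponding weight; a real attack on a CNN follows the model's gradient instead, but the principle is the same. The weights and inputs below are made up purely for illustration.

```python
# Toy linear "classifier": label 1 if w . x > 0, else 0.
w = [0.4, -0.2, 0.3]

def classify(x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0

x = [1.0, 1.0, -0.5]
print(classify(x))        # original input: class 1

# FGSM-style perturbation: shift each component a tiny step (eps)
# against the gradient of the score, which for a linear model is
# just the sign of each weight.
eps = 0.1
x_adv = [xi - eps * (1 if wi > 0 else -1) for xi, wi in zip(x, w)]
print(classify(x_adv))    # perturbed input: class 0
```

No component moved by more than 0.1, yet the predicted class flipped -- the toy analogue of a stop sign being read as a yield sign.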
Peter Burris
Air gaps provide the only approach to ensuring bad actors can't impact a system. But an air gap means the car is neither sharing information about location, position, and speed, nor gathering information about routes and optimization. I don't see air gaps as a likely approach.
jameskobielus
@mikethebbop "Locking down" a network that connects autonomous vehicles does nothing to prevent CNNs and other ML/DL models from being susceptible to adversarial attacks. These vulnerabilities can only be mitigated in the modeling/training workflow.
Ira Michael Blonder
@jameskobielus @mikethebbop Thanks. These convolutional neural nets can be "built in" to the autonomous vehicle's "edge" processes &, therefore, won't require a network connection to mount an attack
Peter Burris
How will application development evolve to better institutionalize AI governance in digital systems? https://www.crowdcha...

Lenny Liebmann
Very, very slowly. :)
David Floyer
In the short term, AI (and data scientists) will be separate from traditional application development. In a few years, AI will move into application development as standard practice.
Tony Flath 🎙 Podcast
A) #CrowdChat I see application development evolving toward a much more #crowdsourced, #opensourced approach to governance -- we're already seeing lots of free, #vendor-neutral #AI education
jameskobielus
App dev will evolve by broadening the reach of CI/CD repositories to encompass ML models and attendant artifacts. That's why big data catalogs need to encompass models, as well as data, metadata, APIs, code, etc. It's all fundamental to AI app governance.
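A catalog record of the kind James describes -- one entry tying a model version back to the exact data snapshot and code commit that produced it -- might be sketched as a simple immutable record. The field names and values here are hypothetical, not any particular catalog product's schema.

```python
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class CatalogEntry:
    """One record in a model catalog, linking a deployed model to its
    training inputs for governance and reproducibility (illustrative)."""
    model_name: str
    model_version: str
    training_data_ref: str   # e.g. a dataset snapshot ID
    code_commit: str         # e.g. git SHA of the training code
    metrics: tuple           # frozen (name, value) pairs

entry = CatalogEntry(
    model_name="churn-predictor",
    model_version="1.3.0",
    training_data_ref="snapshots/customers-2019-01",
    code_commit="a1b2c3d",
    metrics=(("auc", 0.87),),
)
print(asdict(entry)["model_version"])  # → 1.3.0
```

Because the entry is frozen, it behaves like a CI/CD artifact: you publish a new version rather than mutating an old one, which is exactly the audit property governance needs.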