
IBM Analytics

Q6 - What ethical, privacy, and other practical dilemmas accompany the usage of deep learning?

Alexander Lang
Any method that gets touted as "human-like" and then "infallible" has the potential to be a "Weapon of Math Destruction" (great book by @mathbabedotorg)

Mike Tamir, PhD
As with any #DataScience, respect private data with Cyber Security best practices and PII redaction. Because DL thrives on large data sets, pooling data behind a differential privacy layer is a path to watch for in the future
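
[Editor's note: the "differential privacy layer" Mike mentions has a simple canonical form, the Laplace mechanism: add calibrated noise to aggregate queries over pooled data so no single person's record is identifiable. A minimal sketch; the data set, function name, and parameters below are illustrative, not from the chat:]

```python
import numpy as np

rng = np.random.default_rng(42)

def private_count(records, predicate, epsilon=0.5):
    """Laplace mechanism for a counting query. Sensitivity is 1 (adding or
    removing one person changes the count by at most 1), so noise drawn
    from Laplace(scale=1/epsilon) yields epsilon-differential privacy."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + rng.laplace(scale=1.0 / epsilon)

# Illustrative pooled data set: ages contributed by many parties.
ages = [23, 37, 45, 29, 61, 52, 34, 41]
noisy = private_count(ages, lambda a: a > 40)
print(round(noisy, 1))  # a noisy version of the true answer (4)
```

Smaller epsilon means more noise and stronger privacy; the pooled answer stays statistically useful while any individual contribution is masked.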

Alexander Lang
And seeing examples of "we can predict criminals by their faces, using deep learning" gives me the creeps. I hope we get past this Snake Oil phase soon

jameskobielus
Deep learning as a tool for deception and manipulation. See my recent InfoWorld column: http://www.infoworld...

mark simmonds
There will always be fear that the "machines" will take control and replace the workforce. That fear is unfounded, though.

Francois Garillot
There are many, but most are not limited to deep learning. Or they are, just insofar as deep learning excels at vision, audio & video tasks. @craphound delivered an introductory keynote at OSCon that touched on this https://www.youtube....

jameskobielus
Deep learning as a weaponizable technology. See my LinkedIn Pulse post: https://www.linkedin...

Nick Pentreath
I think because DL approaches have become so powerful & successful there is a risk of unintended consequences - like with all mostly automated "learning systems". There need to be checks & balances and a "human in the loop" for monitoring

Dez Blanchfield
- in the same way Crypto experienced attention decades ago about the "power" it came with, Deep Learning and AI in general is now worrying folk who fear it could be used as much for evil as it could for good ;-)

mark simmonds
Using Deep Learning to make life / death decisions - possible devolved responsibility.

Rania Khalaf
What's creepy is that no one really knows exactly why they make the decisions they do, and they can be 'tricked'

Dez Blanchfield
- I agree completely with @jameskobielus in that AI / ML / DeepLearning et al is software that can be turned into a Weapon in so many ways; not like a gun, but just as potentially dangerous, if not more so, if left unchecked..

jameskobielus
Deep learning as a tool for obscuring accountability and transparency of automated decisions. See my LinkedIn Pulse post: https://www.linkedin...

Alexander Lang
It can take fake social content to new levels. There are impressive examples of generated reviews, done through DL

mark simmonds
Deep learning machines battling and out-witting other deep learning machines to win a decision or launch weapons. There has to be human decision-making involved in some industries and situations #SparkDeepLearning

Dez Blanchfield
- one great example where folk fear the topic of AI and DeepLearning in particular is "how do I know it's a machine at the other end of this conversation", the Turing Test in many ways, what's real and not real..

Benjamin Herta
deep learning has an incentive to pull in any and all data possible, creating a centralized place for information that shouldn't be centralized

Francois Garillot
Also, one cool thing to know is that you can sometimes fool deep learning on vision tasks http://www.i-program... https://karpathy.git...
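
[Editor's note: the "fooling" Francois links to is the adversarial-example effect: nudge each input feature a tiny amount in the direction that moves the loss, and a confident classifier's answer flips while the input looks essentially unchanged. A minimal gradient-sign sketch on a toy linear model; the weights, data, and names are illustrative, not from the chat:]

```python
import numpy as np

# Toy linear "classifier": a positive score means class "cat".
rng = np.random.default_rng(0)
w = rng.normal(size=100)                # fixed, "trained" weights
x = w / np.linalg.norm(w)               # an input classified confidently as "cat"

def predict(v):
    return "cat" if w @ v > 0 else "not cat"

# Gradient-sign attack: for a linear model the gradient of the score with
# respect to the input is just w, so step every feature slightly against
# sign(w). Pick eps just large enough to cross the decision boundary.
eps = 1.01 * (w @ x) / np.abs(w).sum()  # a small per-feature step (~0.13 here)
x_adv = x - eps * np.sign(w)

print(predict(x))       # cat
print(predict(x_adv))   # not cat
```

In high dimensions many tiny per-feature steps add up to a large change in the score, which is why imperceptible pixel perturbations can flip an image classifier's label.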

jameskobielus
Deep learning as a technology that we're willing to risk life & limb over, on the assumption that it's ready for primetime. See this post: https://www.linkedin...

mark simmonds
@jameskobielus Yes! 100% agree

Alexander Lang
The key challenge is to educate the public about these possibilities, so we can decide on "red lines"...

Dez Blanchfield
- if we look to Health, for example, who is to set the boundaries or rules re. a #bot using Deep Learning to dispense medicines, and what happens if accidents or bad decisions are made

Rania Khalaf
here is a really interesting blog post about fooling DL http://karpathy.gith...

Dez Blanchfield
- a very heated debate, for example, is running its course in Self Driving Cars: in an accident involving Pedestrians, we are having to decide who gets to live. Does the car kill the Pedestrian or the Passenger?

mark simmonds
On the upside it can remove human emotion, political bias, and personal agendas to make the "right decision" in certain situations. #SparkDeepLearning

mark simmonds
@dez_blanchfield Great example

jameskobielus
@MLnick Agreed. Best not to oversell the accuracy of deep learning for apps upon which lives, health, safety, etc depend.

Alexander Lang
@NxtGenAnalytics I highly recommend @mathbabedotorg's book. You may be less optimistic afterwards ;-)

mark simmonds
Predictive + Prescriptive analytics is a partnership for business success bit.ly/DO-1

Dez Blanchfield
- we also have to consider the fact that it is Human Nature to "game the system" in everything we do, it's in our DNA, so when we build systems, Deep Learning in particular, who governs or polices what, and why, and how..

jameskobielus
@dez_blanchfield I think that a lot of that AI panic is unhinged. It's disturbing when someone like a Stephen Hawking or Elon Musk sounds an alarm for a purely speculative dystopic vision.

Dez Blanchfield
- another debate I've been seeding is "how do we govern and police the issue of one Network learning from another Network, and being influenced or biased with input from another bot's output; who's checking those layers?"

Dez Blanchfield
- this reminds me of the whole Uber experiment, i.e. Uber just went out and broke the law EVERYWHERE and paid the fines, the court costs, and pushed till they eventually changed the laws, or continue to do so in so many places..

Dez Blanchfield
- so many projects are forging ahead with the science part, often without considering law or ethics, either accidentally or deliberately, as the boundaries in this space are very, very grey, too grey in many cases..

Dez Blanchfield
- AdTech and MarTech for example, the Facebook experiment, LinkedIn, Twitter, look at how fast they are moving with data we feed them, and using that "against" us in a way to push Advertising at us, and #sell us stuff for profit, who governs it

Dez Blanchfield
- I'll keep out of the Governments using AI / Deep Learning / ML topic as that will spark ( pun intended ) up the whole Conspiracy conversation and we're only here for an hour ;-)

Dez Blanchfield
- the pace we're moving at here, due to the scale that modern technologies like #cloud and pay-as-you-go platforms now offer, means we can do things that cross "ethical boundaries" BIG and FAST and often unchecked

Dez Blanchfield
- when things were not moving so fast, and HPC was the space of "rarefied air", we had time to debate and decide on such things as the ethical, legal, and correct use of this sort of power; now we move so fast we are tripping up at times

Dez Blanchfield
- I keep the phrase "who watches the watchers" in mind in this space

Dez Blanchfield
- I like the phrase "just because you're not paranoid, doesn't mean someone isn't watching you" ;-)

Dez Blanchfield
- the pace / speed / rate at which we are moving, the volume of new projects, new initiatives is so great, it's a given that we cross or break through ethical or privacy boundaries either accidentally or by design ;-(

Dez Blanchfield
- it's also a fact that now the whole idea of a Citizen Data Scientist means we've built and given tools to a far more diverse set of inquiring minds, and not all of those minds will have good intentions.

Dez Blanchfield
- if you take @jameskobielus's great comment re. weaponized software ( and data ), then you can draw direct parallels to guns: guns are not bad, the people who use guns to do bad things are bad, the guns are just a device..

Dez Blanchfield
- in this case, we're building software tools, platforms, data and data-sets, and open and available "things" which anyone can take, either for free or at a low cost, and use as easily for evil as for good..

