Algorithms: some natural, neutral world

What Makes Algorithms Go Awry?
June 07, 2015
http://www.npr.org/sections/alltechconsidered/2015/06/07/412481743/what-makes-algorithms-go-awry

Algorithms, like humans, can make mistakes.
Last month, users found that the photo-sharing site Flickr’s new image-recognition technology was labeling dark-skinned people as “apes.”

How to limit human bias in computer programs
We can test it under many different scenarios. We can look at the results and see if there are discrimination patterns. In the same way that we try to judge decision-making in many fields when the decision-making is done by humans, we should apply a similar critical lens — but with a computational bent to it, too.
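As a rough sketch of what that kind of computational audit could look like, the Python below compares outcome rates across groups in a batch of automated decisions; the data, the group labels, and the four-fifths threshold are illustrative assumptions, not anything from the article.

```python
# Minimal sketch of auditing an algorithm's decisions for disparate outcomes.
# The decisions, group labels, and the 80% threshold are illustrative assumptions.
from collections import defaultdict

decisions = [
    # (group, approved) pairs as produced by some automated decision system
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
for group, approved in decisions:
    counts[group][1] += 1
    if approved:
        counts[group][0] += 1

rates = {g: approved / total for g, (approved, total) in counts.items()}
print("Approval rates by group:", rates)

# A common rule of thumb: flag for review if any group's rate falls below
# 80% of the highest group's rate (the "four-fifths" heuristic).
best = max(rates.values())
flagged = [g for g, r in rates.items() if r < 0.8 * best]
print("Groups flagged for closer inspection:", flagged)
```

The same idea scales up: run the program under many scenarios, tabulate outcomes by group, and treat large disparities as a prompt for human review.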

The fear I have is that every time this is talked about, people talk about it as if it’s math or physics, therefore some natural, neutral world. And they’re programs! They’re complex programs.
They’re not like laws of physics or laws of nature. They’re created by us. We should look into what they do and not let them do everything. We should make those decisions explicitly.

related:
May 14, 2012
https://franzcalvo.wordpress.com/2015/03/09/algorithms-extending-the-power-of-the-human-mind

Dec 20, 2012
https://franzcalvo.wordpress.com/2013/12/25/big-data-has-spawned-a-cult-of-infallibility

Algorithms: extending the power of the human mind

Algorithms: The Ever-Growing, All-Knowing Way Of The Future
May 14, 2012
http://www.npr.org/blogs/alltechconsidered/2012/05/14/152444019/algorithms-the-ever-growing-all-knowing-way-of-the-future

If the Industrial Revolution was about extending the power of human muscle with inventions like the car, then the computer revolution is about extending the power of the human mind — and algorithms are the key to its success.

What would happen if Google Maps knew that you were looking for a new car? Maybe when you looked up directions to a party, it would suggest a route that passes right past a dealership.

… What worries him most is that we humans haven’t yet evolved to be as wary of algorithms as we are of used car salesmen.

related:
https://franzcalvo.wordpress.com/2015/06/09/algorithms-some-natural-neutral-world

https://franzcalvo.wordpress.com/2013/12/25/big-data-has-spawned-a-cult-of-infallibility

Humans + Machines = Ultimate Intelligence
Andrew Fryer
May 4, 2016
https://www.microsoft.com/en-gb/developers/articles/week01may16/humans-machines-ultimate-intelligence

Algorithms and Data Structures (2011)

Problem Solving with Algorithms and Data Structures Using Python
SECOND EDITION – August 22, 2011
by Bradley N. Miller, David L. Ranum
http://interactivepython.org/runestone/static/pythonds/index.html

Implementing a Stack in Python
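As a pointer to what that chapter covers, here is a minimal list-backed stack in Python; it follows the common approach of keeping the top of the stack at the end of a list, though the exact class in the book may differ in naming and detail.

```python
# Minimal list-backed stack: the top of the stack is the end of the list.
class Stack:
    def __init__(self):
        self._items = []

    def is_empty(self):
        return not self._items

    def push(self, item):
        self._items.append(item)   # O(1) append at the top

    def pop(self):
        return self._items.pop()   # O(1) removal from the top

    def peek(self):
        return self._items[-1]

    def size(self):
        return len(self._items)

s = Stack()
s.push("a")
s.push("b")
print(s.pop())   # "b" — last in, first out
print(s.peek())  # "a"
```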

Recursion > Dynamic programming
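The contrast that section draws is between plain recursion, which recomputes the same subproblems over and over, and dynamic programming, which stores each answer so it is computed only once. A memoized Fibonacci is a generic illustration of the idea (not necessarily the example the book uses):

```python
from functools import lru_cache

# Plain recursion recomputes the same subproblems exponentially many times;
# memoization (one form of dynamic programming) computes each one once.
@lru_cache(maxsize=None)
def fib(n):
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(35))  # fast with memoization; the naive recursive version is noticeably slow
```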

Self-check questions

http://runestoneinteractive.org

Big Data has spawned a cult of infallibility

Forget YOLO: Why ‘Big Data’ Should Be The Word Of The Year
by Geoff Nunberg
December 20, 2012
http://www.npr.org/2012/12/20/167702665/geoff-nunbergs-word-of-the-year-big-data

Whatever the sticklers say, data isn’t a plural noun like “pebbles.” It’s a mass noun like “dust.”

It’s only when all those little chunks are aggregated that they turn into Big Data; then the software called analytics can scour it for patterns.

You idly click on an ad for a pair of red sneakers one morning, and they’ll stalk you to the end of your days.
It makes me nostalgic for the age when cyberspace promised a liberating anonymity.
I think of that famous 1993 New Yorker cartoon by Peter Steiner: “On the Internet, nobody knows you’re a dog.”
Now it’s more like, “On the Internet, everybody knows what brand of dog food you buy.”

In some circles, Big Data has spawned a cult of infallibility — a vision of prediction obviating explanation and math trumping science.
In a manifesto in Wired, Chris Anderson wrote, “With enough data, the numbers speak for themselves.”

The trouble is that you can’t always believe what they’re saying.
When you’ve got algorithms weighing hundreds of factors over a huge data set, you can’t really know why they come to a particular decision or whether it really makes sense.

When I was working with systems like these some years ago at the Xerox Palo Alto Research Center, we used to talk about a 95 percent solution.
So what if Amazon’s algorithms conclude that I’d be interested in Celine Dion’s greatest hits, as long as they get 19 out of 20 recommendations right?
But those odds are less reassuring when the algorithms are selecting candidates for the no-fly list.

I don’t know if the phrase Big Data itself will be around 20 years from now, when we’ll probably be measuring information in humongobytes.
People will be amused to recall that a couple of exabytes were once considered big data, the way we laugh to think of a time when $15,000 a year sounded like big money.
But 19 out of 20 is probably still going to be a good hit rate for those algorithms, and people will still feel the need to sort out the causes from the correlations — still asking the old question, what are patterns for?

related:
https://franzcalvo.wordpress.com/2014/08/18/weighing-brain-activity-with-the-balance

May 14, 2012
https://franzcalvo.wordpress.com/2015/03/09/algorithms-extending-the-power-of-the-human-mind

June 7, 2015
https://franzcalvo.wordpress.com/2015/06/09/algorithms-some-natural-neutral-world