Algorithms: some natural, neutral world

What Makes Algorithms Go Awry?
June 07, 2015
http://www.npr.org/sections/alltechconsidered/2015/06/07/412481743/what-makes-algorithms-go-awry

Algorithms, like humans, can make mistakes.
Last month, users found that the photo-sharing site Flickr’s new image-recognition technology was labeling dark-skinned people as “apes.”

How to limit human bias in computer programs
We can test it under many different scenarios. We can look at the results and see if there are discrimination patterns. In the same way that we try to judge decision-making in many fields when the decision-making is done by humans, we should apply a similar critical lens, but with a computational bent to it, too.
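One concrete way to apply that kind of computational lens (a rough sketch of my own, not something described in the NPR piece): run the program over many test scenarios, tally its decisions by group, and flag large gaps in outcome rates. The group labels and sample decisions below are made-up stand-ins.

from collections import defaultdict

def audit_outcomes(decisions):
    """decisions: list of (group, approved) pairs collected from test runs.
    Returns the approval rate per group so large gaps stand out."""
    counts = defaultdict(lambda: [0, 0])   # group -> [approved, total]
    for group, approved in decisions:
        counts[group][0] += int(approved)
        counts[group][1] += 1
    return {g: approved / total for g, (approved, total) in counts.items()}

# Hypothetical test run: the same kinds of scenarios for each group,
# with the program's yes/no decision recorded for each one.
sample = [("group_a", True), ("group_a", True), ("group_a", False),
          ("group_b", False), ("group_b", False), ("group_b", True)]
rates = audit_outcomes(sample)
print(rates)   # approval rate per group
if max(rates.values()) - min(rates.values()) > 0.2:
    print("Large gap between groups -- worth a closer look for bias.")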

The fear I have is that every time this is talked about, people talk about it as if it’s math or physics, therefore some natural, neutral world. And they’re programs! They’re complex programs.
They’re not like laws of physics or laws of nature. They’re created by us. We should look into what they do and not let them do everything. We should make those decisions explicitly.

related:
May 14, 2012
https://franzcalvo.wordpress.com/2015/03/09/algorithms-extending-the-power-of-the-human-mind

Dec 20, 2012
https://franzcalvo.wordpress.com/2013/12/25/big-data-has-spawned-a-cult-of-infallibility

The Flash Crash

How High-Frequency Trading Is Changing Wall Street
January 13, 2011
http://www.npr.org/2011/01/13/132629284/how-high-frequency-trading-is-changing-wall-street

On May 6, 2010, the Dow Jones industrial average dropped hundreds of points in a matter of minutes — and then recovered moments later.

Known as the “flash crash,” the incident sparked congressional hearings as well as an investigation by the Securities and Exchange Commission and the Commodity Futures Trading Commission. The two market regulators later issued a joint report in September blaming a single sale of $4.1 billion in futures contracts for the nosedive.

On the downsides of computers making investment decisions:

“The downside is they don’t have a smell test. They don’t have the basic common sense that most of the rest of us do. So back in May, for instance, we had this incident known as the flash crash, where the stock market plunged over 500 points in the space of about 5 minutes. And no one really knows why that happened.
No one quite understands the mechanisms — the self-snowballing — which caused all of these computer programs to either pile onto the selling or else to just turn off altogether and to say, ‘This is too volatile; we’re leaving the market.’ What we do know is that there were some crazy trades which happened in those five minutes. We had stocks trading for a penny a share or $100 a share, and no human would do that.
What we think is that, left to their own devices, every so often when you have a highly complex system like this, it can just spin out of control, and it’s hard to know when or how or whether a market is going to spin out of control in that way. Most of the time, computerized trading makes things faster and better and more efficient, but sometimes you get these things called tail events — which you couldn’t ever expect — which can cause quite a lot of chaos.”

related:
Algorithms Take Control of Wall Street
12.27.10
http://www.wired.com/2010/12/ff_ai_flashtrading/all/1
seven key factors, including the judgment of his neural network

we create more than we can understand
https://franzcalvo.wordpress.com/2014/12/30/steve-jurvetson-deep-learning

Algorithms: extending the power of the human mind

Algorithms: The Ever-Growing, All-Knowing Way Of The Future
May 14, 2012
http://www.npr.org/blogs/alltechconsidered/2012/05/14/152444019/algorithms-the-ever-growing-all-knowing-way-of-the-future

If the Industrial Revolution was about extending the power of human muscle with inventions like the car, then the computer revolution is about extending the power of the human mind — and algorithms are the key to its success.

What would happen if Google Maps knew that you were looking for a new car? Maybe when you looked up directions to a party, it would suggest a route that passes right past a dealership.

… What worries him most is that we humans haven’t yet evolved to be as wary of algorithms as we are of used car salesmen.

related:
https://franzcalvo.wordpress.com/2015/06/09/algorithms-some-natural-neutral-world

https://franzcalvo.wordpress.com/2013/12/25/big-data-has-spawned-a-cult-of-infallibility

Humans + Machines = Ultimate Intelligence
Andrew Fryer
4 May 2016
https://www.microsoft.com/en-gb/developers/articles/week01may16/humans-machines-ultimate-intelligence

Algorithms and Data Structures (2011)

Problem Solving with Algorithms and Data Structures Using Python
SECOND EDITION – August 22, 2011
by Bradley N. Miller, David L. Ranum
http://interactivepython.org/runestone/static/pythonds/index.html

Implementing a Stack in Python
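The book (as I recall) builds the Stack on top of a Python list, with the top of the stack at the end of the list. The sketch below is my own minimal version along those lines, using the usual stack ADT operations rather than the book’s exact listing.

class Stack:
    """A last-in, first-out (LIFO) collection backed by a Python list."""

    def __init__(self):
        self.items = []            # the end of the list is the top of the stack

    def is_empty(self):
        return self.items == []

    def push(self, item):
        self.items.append(item)    # add to the top

    def pop(self):
        return self.items.pop()    # remove and return the top item

    def peek(self):
        return self.items[-1]      # look at the top without removing it

    def size(self):
        return len(self.items)

s = Stack()
s.push("dog")
s.push(4)
print(s.peek())    # 4
print(s.pop())     # 4
print(s.size())    # 1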

Recursion > Dynamic programming
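The point of that section, as I understand it: naive recursion recomputes the same subproblems over and over, while dynamic programming computes each answer once and stores it. A rough illustration of my own using the classic making-change problem (not the book’s exact listing):

def make_change(coin_values, target):
    """Fewest coins needed to total `target`, built bottom-up."""
    min_coins = [0] * (target + 1)         # min_coins[c] = fewest coins for amount c
    for cents in range(1, target + 1):
        best = cents                       # worst case: all 1-unit coins (assumes one exists)
        for coin in (c for c in coin_values if c <= cents):
            best = min(best, min_coins[cents - coin] + 1)
        min_coins[cents] = best
    return min_coins[target]

print(make_change([1, 5, 10, 25], 63))     # 6 coins: 25 + 25 + 10 + 1 + 1 + 1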

Self check questions

http://runestoneinteractive.org

Big Data has spawned a cult of infallibility

Forget YOLO: Why ‘Big Data’ Should Be The Word Of The Year
by Geoff Nunberg
December 20, 2012
http://www.npr.org/2012/12/20/167702665/geoff-nunbergs-word-of-the-year-big-data

Whatever the sticklers say, data isn’t a plural noun like “pebbles.” It’s a mass noun like “dust.”

It’s only when all those little chunks are aggregated that they turn into Big Data; then the software called analytics can scour it for patterns.

You idly click on an ad for a pair of red sneakers one morning, and they’ll stalk you to the end of your days.
It makes me nostalgic for the age when cyberspace promised a liberating anonymity.
I think of that famous 1993 New Yorker cartoon by Peter Steiner: “On the Internet, nobody knows you’re a dog.”
Now it’s more like, “On the Internet, everybody knows what brand of dog food you buy.”

In some circles, Big Data has spawned a cult of infallibility — a vision of prediction obviating explanation and math trumping science.
In a manifesto in Wired, Chris Anderson wrote, “With enough data, the numbers speak for themselves.”

The trouble is that you can’t always believe what they’re saying.
When you’ve got algorithms weighing hundreds of factors over a huge data set, you can’t really know why they come to a particular decision or whether it really makes sense.

When I was working with systems like these some years ago at the Xerox Palo Alto Research Center, we used to talk about a 95 percent solution.
So what if Amazon’s algorithms conclude that I’d be interested in Celine Dion’s greatest hits, as long as they get 19 out of 20 recommendations right?
But those odds are less reassuring when the algorithms are selecting candidates for the no-fly list.

I don’t know if the phrase Big Data itself will be around 20 years from now, when we’ll probably be measuring information in humongobytes.
People will be amused to recall that a couple of exabytes were once considered big data, the way we laugh to think of a time when $15,000 a year sounded like big money.
But 19 out of 20 is probably still going to be a good hit rate for those algorithms, and people will still feel the need to sort out the causes from the correlations — still asking the old question, what are patterns for?

related:
https://franzcalvo.wordpress.com/2014/08/18/weighing-brain-activity-with-the-balance

May 14, 2012
https://franzcalvo.wordpress.com/2015/03/09/algorithms-extending-the-power-of-the-human-mind

June 7, 2015
https://franzcalvo.wordpress.com/2015/06/09/algorithms-some-natural-neutral-world