I am a sucker for old stories about Silicon Valley and the origins of the web. But even more, I am a huge SGI fan. I remember bidding on eBay for a gorgeous O2 back in 2003… It was not to be, and I bought my Titanium PowerBook G4 instead.
The first precept of industrial design is that the function of an object should determine its design and materials.
Yvon Chouinard, Let My People Go Surfing.
Accountability at an algorithmic level is a fascinating topic, especially in a world where AI algorithms will build new algorithms without human intervention.
We human beings seem to be obsessed with black boxes: The highest compliment we give to technology is that it feels like magic. When the workings of a new technology are too obvious, too easy to explain, it can feel banal and uninteresting. But when I asked David Jensen — a professor at the University of Massachusetts at Amherst and one of the researchers being funded by Gunning — why X.A.I. had suddenly become a compelling topic for research, he sounded almost soulful: “We want people to make informed decisions about whether to trust autonomous systems,” he said. “If you don’t, you’re depriving people of the ability to be fully independent human beings.”
A clear example is Facebook’s feed and the way they tweak what a user will see. The same goes for Google’s featured “answer” to a user’s question. It seems to me a no-brainer that we need to hold companies accountable for their algorithmic misbehavior so we can fix the fundamental issues they generate, particularly on products that reach billions of people.
Three years ago, David Gunning, one of the most consequential people in the emerging discipline of X.A.I., attended a brainstorming session at a state university in North Carolina. The event had the title “Human-Centered Big Data,” and it was sponsored by a government-funded think tank called the Laboratory for Analytic Sciences. The idea was to connect leading A.I. researchers with experts in data visualization and human-computer interaction to see what new tools they might invent to find patterns in huge sets of data. There to judge the ideas, and act as hypothetical users, were analysts for the C.I.A., the N.S.A. and sundry other American intelligence agencies.
The researchers in Gunning’s group stepped confidently up to the white board, showing off new, more powerful ways to draw predictions from a machine and then visualize them. But the intelligence analyst evaluating their pitches, a woman who couldn’t tell anyone in the room what she did or what tools she was using, waved it all away. Gunning remembers her as plainly dressed, middle-aged, typical of the countless government agents he had known who toiled thanklessly in critical jobs. “None of this solves my problem,” she said. “I don’t need to be able to visualize another recommendation. If I’m going to sign off on a decision, I need to be able to justify it.” She was issuing what amounted to a broadside. It wasn’t just that a clever graph indicating the best choice wasn’t the same as explaining why that choice was correct. The analyst was pointing to a legal and ethical motivation for explainability: Even if a machine made perfect decisions, a human would still have to take responsibility for them — and if the machine’s rationale was beyond reckoning, that could never happen.
Artificial Intelligence applications are driving the design of new silicon architectures. Software is eating the world; that is a known fact. What is new is that AI software is creating the need for custom, more power-efficient architectures that go beyond Kepler/Pascal/you-name-it GPU architectures:
What’s changed is a growing belief among some investors that AI could be a unique opportunity to create significant new semiconductor companies. Venture capitalists have invested $113 million in AI-focused chip startups this year—almost three times as much as in all of 2015, according to data from PitchBook, a service that tracks private company transactions.
[…] as I mentioned earlier in the podcast, we… we discovered Bitcoin in beach in Ibiza […]
So now you know.
Matt Levine for Bloomberg:
There are two massive areas of job opportunity for data scientists: They can build models that help hedge funds trade stocks and bonds, or they can build models that help internet companies sell advertisements on web pages. Oh or they can build models that help cure cancer or whatever, but compared to financial trading and internet advertising that is a small and unprofitable niche. One of the most incredible feats of marketing of our century is that the internet companies have convinced a lot of people that selling advertisements on web pages is basically the same as curing cancer, while buying stocks and bonds is evil:
“At tech companies, the permeating value is that they’re about trying to make the world a better place, whereas at hedge funds it’s about making more money,” Mr. Epstein said.
As a former scientist, I cannot stop thinking about what would happen if we took all this human computing power and applied it to solving fundamental problems that impact society and the human species.
However, I do not buy the argument that, because selling advertisements on web pages is not perceived as “bad”, hedge funds should get the same pass. Selling advertisements on web pages has generated enormous value in quite tangible by-products, e.g. ML translation. What have hedge funds given us?
Maciej Cegłowski’s talk at Republica Berlin 2017 really makes you reflect on the status quo of the tech industry.
Little did I know when I started my career as a research physicist at CERN that I was a member of the Bayesianist “tribe”. In fact, I was not even aware back then that what we called data analysis (“another day working with data”) was even a branch of the Machine Learning religion.
The content below is from The Master Algorithm by Pedro Domingos. Formatting all mine.
| Tribe | Core belief | Master algorithm |
| --- | --- | --- |
| Symbolists | All intelligence can be reduced to manipulating symbols | Inverse deduction: it figures out what knowledge is missing in order to make a deduction go through, and then makes it as general as possible |
| Connectionists | Learning is what the brain does, and we need to reverse engineer it | Back propagation: it compares a system’s output with the desired one and then successively changes the connections in layer after layer of neurons so as to bring the output closer to what it should be |
| Evolutionaries | The mother of all learning is natural selection | Genetic programming: it mates and evolves computer programs in the same way that nature mates and evolves organisms |
| Bayesians | All learned knowledge is uncertain, and learning itself is a form of uncertain inference | Bayes’ theorem: it tells us how to incorporate new evidence into our beliefs, and probabilistic inference algorithms do that as efficiently as possible |
| Analogizers | The key to learning is recognizing similarities between situations and thereby inferring other similarities | Support vector machine: it figures out which experiences to remember and how to combine them to make new predictions |
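Since the Bayesians’ master algorithm is just Bayes’ theorem, a toy example helps make that row concrete. This is a minimal sketch of a single belief update; the numbers (a 1% prior, a 90% true-positive rate, a 5% false-positive rate) are made up for illustration:

```python
# Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E)
# Updating belief in a hypothesis H after observing positive evidence E.

def bayes_update(prior, likelihood, false_positive_rate):
    """Posterior probability of H given positive evidence E."""
    # P(E) via the law of total probability:
    # P(E) = P(E|H) * P(H) + P(E|~H) * P(~H)
    evidence = likelihood * prior + false_positive_rate * (1 - prior)
    return likelihood * prior / evidence

# Hypothetical numbers: 1% prior belief, evidence detected in 90% of
# true cases, and falsely triggered 5% of the time otherwise.
posterior = bayes_update(prior=0.01, likelihood=0.9, false_positive_rate=0.05)
print(round(posterior, 3))  # prints 0.154
```

Even with strong evidence, a low prior keeps the posterior modest — the kind of incremental, uncertainty-aware inference the Bayesian tribe builds everything on.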
I found this fascinating: Cade Metz for Wired:
As detailed in a research paper published by OpenAI this week, Mordatch and his collaborators created a world where bots are charged with completing certain tasks, like moving themselves to a particular landmark. The world is simple, just a big white square—all of two dimensions—and the bots are colored shapes: a green, red, or blue circle. But the point of this universe is more complex. The world allows the bots to create their own language as a way of collaborating, helping each other complete those tasks.
You could think of this as a new level of cryptophasia, i.e. a language created by twins that only the two children can understand. Some might say it is scary; some might say it is amazing that we are getting to this level of reinforcement learning.
Johana Bhuiyan for Recode:
The Chinese company’s new U.S. lab, which will focus on intelligent driving systems and AI-based security for transportation, also formalizes what many already knew: Didi is working on self-driving cars.
The company has already partnered with Udacity — a college-level nanodegree startup — on its self-driving program, at the end of which Didi and a number of other partnering companies get first pick of the graduates the companies want to hire.
Didi not only acquired Uber China’s assets last year; it is also actively poaching AV talent.
I can only imagine the potential of AVs deployed at Didi scale in China in the near future.