Now an MBA working at the intersection of technology and the re/insurance value chain. In another life, a physicist at CERN. Addicted to science and technology and always trying to understand how things work.
And it was during one of the Summit breaks that I bumped into Kevin Kelly. Armed with courage, I timidly introduced myself. I thanked him for Wired, for letting me dream about a better future since 1997, the year I started studying physics and got to know the magazine. I thanked him for giving me and my best friends hope. Hope that we were not wrong, that science and technology would build a better future, that a “hippie toy” OS called Linux that made us all happy, and allowed us to experiment, was something more. That Apple was doomed and then it was not. That Google was something special. That Sun and Silicon Graphics were building hardware from another planet and then they vanished into space.
A couple of weeks ago, the Financial Times reported that Google had achieved quantum supremacy.
A paper written by Google’s researchers was apparently leaked on a NASA website. The paper in question claims that the programmatic-advertising company ran a “circuit sampling” experiment in a little over three minutes. The same problem would take Summit, the number-one classical supercomputer on the planet, around 10,000 years to compute.
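To get some intuition for that gap, here is a toy, back-of-the-envelope sketch in Python (my own illustration, not Google’s method): a brute-force classical simulation has to keep a full state vector of 2^n complex amplitudes in memory, so the cost explodes with the number of qubits.

```python
# Toy sketch (mine, not Google's experiment): a brute-force classical simulation
# of an n-qubit circuit has to store 2**n complex amplitudes, which is one
# intuition for why a classical supercomputer falls so far behind.

def state_vector_bytes(n_qubits: int) -> int:
    """Memory needed for a dense complex128 state vector over n qubits."""
    return (2 ** n_qubits) * 16  # 16 bytes per complex128 amplitude

for n in (30, 40, 50, 53):  # 53 is roughly the qubit count reported for Google's chip
    gib = state_vector_bytes(n) / 2 ** 30
    print(f"{n} qubits -> {gib:,.0f} GiB just to hold the state vector")
```

At 53 qubits that is on the order of a hundred petabytes for the state vector alone, before doing any of the actual gate arithmetic.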
Quantum computing (QC) has been one of my “darling” topics since I was introduced to it in 2000. Back then, a fully functional QC was still kind of science fiction.
As I would like to keep learning about QC, its applications, and future business models, I have decided to focus my blog on it moving forward.
Bitcoin’s annual electricity consumption adds up to 45.8 TWh.
The corresponding annual carbon emissions range from 22.0 to 22.9 MtCO2.
This level sits between the levels produced by the nations of Jordan and Sri Lanka.
Just to add more perspective, Switzerland consumes around 60 TWh a year. One day, we will look back on this nonsense and wonder whether it was really necessary.
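As a quick sanity check on those figures (my own arithmetic, using only the numbers quoted above), the implied carbon intensity of the electricity behind Bitcoin works out to roughly 480–500 g CO2 per kWh, and its draw is about three quarters of Switzerland’s:

```python
# Back-of-the-envelope arithmetic using only the figures quoted above.
btc_twh = 45.8                        # Bitcoin's annual electricity consumption, TWh
co2_mt_low, co2_mt_high = 22.0, 22.9  # estimated annual emissions, MtCO2
switzerland_twh = 60.0                # Switzerland's average annual consumption, TWh

# Implied carbon intensity of the electricity mix powering Bitcoin mining.
# 1 MtCO2 = 1e12 g and 1 TWh = 1e9 kWh, so MtCO2/TWh * 1000 = g CO2 per kWh.
low_g_per_kwh = co2_mt_low / btc_twh * 1000
high_g_per_kwh = co2_mt_high / btc_twh * 1000
print(f"Implied intensity: ~{low_g_per_kwh:.0f}-{high_g_per_kwh:.0f} g CO2/kWh")

# Bitcoin's consumption relative to Switzerland's.
print(f"Bitcoin draws ~{btc_twh / switzerland_twh:.0%} of Switzerland's electricity")
```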
I am a sucker for old stories about Silicon Valley and the origins of the web. But even more, I am a huge SGI fan. I remember bidding on eBay for a gorgeous O2 back in 2003… It was not to be, and I bought my Titanium PowerBook G4 instead.
Accountability at an algorithmic level is a fascinating topic. Especially in a world where AI algorithms will build new algorithms without human intervention.
We human beings seem to be obsessed with black boxes: The highest compliment we give to technology is that it feels like magic. When the workings of a new technology are too obvious, too easy to explain, it can feel banal and uninteresting. But when I asked David Jensen — a professor at the University of Massachusetts at Amherst and one of the researchers being funded by Gunning — why X.A.I. had suddenly become a compelling topic for research, he sounded almost soulful: “We want people to make informed decisions about whether to trust autonomous systems,” he said. “If you don’t, you’re depriving people of the ability to be fully independent human beings.”
A clear example is Facebook’s feed and the way they tweak what a user will see. The same goes for Google’s featured “answer” to a user’s question. It seems to me a no-brainer that we need to hold companies accountable for their algorithmic misbehavior so we can fix the fundamental issues it generates. In particular, on products that reach billions of people.
Three years ago, David Gunning, one of the most consequential people in the emerging discipline of X.A.I., attended a brainstorming session at a state university in North Carolina. The event had the title “Human-Centered Big Data,” and it was sponsored by a government-funded think tank called the Laboratory for Analytic Sciences. The idea was to connect leading A.I. researchers with experts in data visualization and human-computer interaction to see what new tools they might invent to find patterns in huge sets of data. There to judge the ideas, and act as hypothetical users, were analysts for the C.I.A., the N.S.A. and sundry other American intelligence agencies.
The researchers in Gunning’s group stepped confidently up to the white board, showing off new, more powerful ways to draw predictions from a machine and then visualize them. But the intelligence analyst evaluating their pitches, a woman who couldn’t tell anyone in the room what she did or what tools she was using, waved it all away. Gunning remembers her as plainly dressed, middle-aged, typical of the countless government agents he had known who toiled thanklessly in critical jobs. “None of this solves my problem,” she said. “I don’t need to be able to visualize another recommendation. If I’m going to sign off on a decision, I need to be able to justify it.” She was issuing what amounted to a broadside. It wasn’t just that a clever graph indicating the best choice wasn’t the same as explaining why that choice was correct. The analyst was pointing to a legal and ethical motivation for explainability: Even if a machine made perfect decisions, a human would still have to take responsibility for them — and if the machine’s rationale was beyond reckoning, that could never happen.
Artificial Intelligence applications are driving the design of new silicon architectures. Software is eating the world; that is a known fact. What is new is that AI software is creating the need for new, custom, more power-efficient architectures that go beyond Kepler/Pascal/you-name-it GPU architectures:
What’s changed is a growing belief among some investors that AI could be a unique opportunity to create significant new semiconductor companies. Venture capitalists have invested $113 million in AI-focused chip startups this year—almost three times as much as in all of 2015, according to data from PitchBook, a service that tracks private company transactions.