By now you have probably read about OpenAI, the project started by Elon Musk and Sam Altman. The purpose of OpenAI is to do top-notch AI research and open-source it. The idea behind the group is that if anyone and everyone has access to cutting-edge AI technology, creating a killer AI becomes difficult because there will be a plethora of AIs to balance out any bad one. Before this group existed, I wasn't worried about a killer AI. Now I am.
To explain why, let me start with an analogy. Nuclear bomb technology is dangerous and destructive. Should we open-source nuclear technology and make it easy for everyone to build a nuclear bomb, on the theory that if a few bad guys have nukes but even more good guys do, then we are safe? Admittedly, AI technology is different from nuclear bomb technology, so I want to highlight a few ways in which they differ and show why I believe AI technology is even more dangerous to open source.
One of the interesting points highlighted in both Nick Bostrom's book and Roman Yampolskiy's book on the dangers of future AI is that we don't know exactly what the "takeoff" of an AI will look like. If there are accelerating returns to intelligence, as most people in the field believe, then being first to create a super AI, even by a few hours (or minutes!), could set off a series of events that put that AI on top of the world forever. Why? Because the first AI to start improving itself, with just a few extra hours of learning and improving (machines can do A LOT in a few hours), could build a lead that no later AI could ever close. By getting smarter faster, and having that small head start, it could also figure out how to stop, delay, and damage any competing AIs.
If this scenario isn't true, and AIs don't have a rapidly accelerating takeoff, then the things OpenAI worries about aren't things we need to worry about: given a bit of time and resources, smart people will figure out how to catch up to a non-accelerating AI. So OpenAI only needs to worry if takeoff accelerates. Let's work from that assumption and look at three reasons why OpenAI is a problem.
1. At the moment, we don't know where the key breakthroughs in AI technology will come from. By putting top-tier AI technology in the hands of anybody and everybody, we increase the chances that it ends up in the hands of someone careless or, worse, nefarious. Hackers with bad motives will now have access to tools and techniques that would otherwise have been off-limits to them.
I believe Musk and Altman would counter that, since we don't know where the key AI breakthrough will come from, it's all the more important to open source this stuff and encourage collaboration. But I believe there is, in general, a correlation between intelligence and wisdom, and if it takes an IQ in the top 1% to solve AI, the odds of someone in that group being wise about its uses and safety are better than if everyone in the top 20% of IQs gets to play around with it.
2. OpenAI is unlikely to pursue the right technologies, which will make the group irrelevant. If you don't participate deeply in the AI community, you probably don't realize how many different approaches to generalized intelligence exist, or how different they are from one another. The convolutional deep learning neural net approach is hot right now, but probabilistic programming and other Bayesian approaches are threatening to dethrone deep learning. Then you have people like Doug Hofstadter still working on analogy-based approaches, IBM's Watson with its cognitive computing and symbolic logic approaches, and Numenta's HTM approach. On top of that you have hardware-based approaches like spiking-neuron chips and neuromorphic engineering. I don't see how OpenAI can possibly stay on top of all of these.
You could argue that a group with the backing and reputation of OpenAI will have good access to all of these ideas and can therefore quickly come up to speed on a new technology if it proves promising and looks like it might be the key breakthrough, right? Well, keep in mind that convolutional neural networks, the current hot fad in AI, were laughed at for two decades and had very few people working on them until Yann LeCun and his team exploded onto the scene a few years ago and shockingly beat the pants off every other image classification approach. It's entirely possible that the key breakthrough in AI ends up being an idea that was ignored for many years before it had its moment.
3. The OpenAI approach assumes that companies like Google and Facebook, which sit on top of the AI world, will be careless or evil if they create an AI. But will they? I personally feel that if Mark Zuckerberg or Larry Page were in control of the world's first superintelligence, we would probably be in a pretty safe spot. Neither has any incentive to destroy the world, and both have made enough money that their motives at this point are less financial and more ideological. I think they would both be very thoughtful about AI development.
For me, then, the status quo of having most of the AI brainpower concentrated in academia, Google, Facebook, and a few other companies is actually very comforting as I think about possible negative AI consequences. OpenAI, on the other hand, scares the shit out of me.
But when I step back and think about it, Musk and Altman are probably much smarter about this stuff than I am, so surely everything I have just mentioned has already occurred to them. So why did they create OpenAI? In my opinion, it's because they don't have a major stake in Google or Facebook, and they want a way to get an economic stake in whatever comes next. It's a way to weaken those companies and grab their own piece of the upside of AI economics. Much the way I use this blog to selfishly generate leads for angel investing in AI, I believe Musk and Altman are doing this to increase their chances of being part of the next deca-billion-dollar tech company. They aren't worried about AI taking over the world; they're worried they will miss out when that step-function breakthrough occurs.