Will Machines Have Mental Illness? An Interview With Dr. Roman Yampolskiy

This past week I had the chance to sit down with Dr. Roman Yampolskiy, author of the new book Artificial Superintelligence: A Futuristic Approach. The book is unique, and I'll have a full review next week. What follows is an edited excerpt from my interview with Dr. Yampolskiy.

Rob: I loved the chapter on wireheading and whether machines will have mental illness. Can you talk more about that?

Dr. Y: A lot depends on how we get to that A.I. Is it an upload of a human brain, where we scanned one and uploaded it? Then it inherits every problem a human being has as well; it just makes them faster and more prominent. If it's a reward-based system, then obviously there is the danger of it going directly for its reward channel, maximizing some utility function. Humans do it all the time: go directly for the rewards without the work. Pretty much any sufficiently intelligent system will figure out how to game the system.

Rob: When you talk to other technically savvy people, do you feel like we are as aware as we should be of some of the coming challenges of A.I., from an ethical, political, and legal perspective?

Dr. Y: Up until a few years ago there was almost zero understanding within the A.I. research community. Now, with all the big names coming out and speaking about it and money becoming available, many people are realizing it is something we should care about. But there is still a large portion of people who are completely dismissive and disagree with the argument, or who have never heard of it.

Rob: What are your personal views on how far away we are from real human-level intelligence?

Dr. Y: No one really knows how difficult a problem it is. It could be that it takes a brute-force approach and enough compute power, in which case the things Kurzweil is saying probably make sense. Or it could turn out that there's just a formula for intelligence, and some kid with a laptop discovers it and it happens in 5 years, or 2 years. It's less likely to happen that soon, but it's possible. Whatever safety mechanisms we need are more complex: it's harder to create a system with these properties than just a random system, so it's going to take us longer to develop safety mechanisms. That's why right now is a great time to start looking at it.

Rob: When you look around at a lot of the A.I. research that is going on, and you think about some things in academia that haven't been broadly applied yet, is there anything that comes to mind that entrepreneurs should be looking at?

Dr. Y: I think academia is now behind industry in some of this research. In fact, in many cases they are collaborating, or industry stole all of the professors. So at this point I think academia is chasing industry, trying to do something useful. And because of how industry is structured, it has no incentive to work on safety and slow down. So maybe that is where academia could be beneficial. We can afford to take the long-term view.

Rob: In your own words, why should someone read your book?

Dr. Y: Well, it depends. If they are a researcher, or if they are doing work in A.I., it's definitely good to see this perspective and spend at least a few minutes thinking about the implications of your work. If it's just the general public, it's good to know what might be happening to you in the future. I advise a lot of kids on career choices, and it scares me that for many of them, the jobs they're choosing will not exist by the time they graduate.

I'm currently on the last 50 pages of Dr. Yampolskiy's book, so stay tuned for a full summary later this week. The book is worth the price just for the section on "mind design" and the philosophical questions it raises.