A DARPA Perspective on Artificial Intelligence

What’s the ground truth on artificial intelligence (AI)? In this video, John Launchbury, the Director of DARPA’s Information Innovation Office (I2O), attempts to demystify AI: what it can do, what it can’t do, and where it is headed. Through a discussion of the “three waves of AI” and the capabilities required for AI to reach its full potential, John provides analytical context to help understand the roles AI has played, plays now, and could play in the future.

Download the slides at:

26 comments

  1. Kristofer Pettersson says:

    If the intention is to demystify, I think it was a failure. DARPA predicts a third AI wave which can perceive, reason, learn, and even abstract. Sure, we /all/ do that. The question is when it will arrive. Launchbury says “it requires a lot of work”. Two other remarkable projects that required a lot of work were the Manhattan Project and the Apollo program. Both were motivated by military goals rather than market goals. We generally think of market-driven goals as more efficient than government goals. The third wave of AI is certainly directed by market-driven goals. Isn’t it credible that the free market will reach the third wave faster than DARPA can? Why wouldn’t an investor put all his money into this third wave of AI at this point in time? It seems likely that by the time his investment in first- or even second-wave systems reached maturity and became competitive, the third wave would already be here and steal the market from him.

    Another indication of the speed at which this development is moving is his reflection on self-driving cars, which went from zero success to almost 100% success in just one race. The second wave has already been with us since 2012, and it happened because we suddenly started to use GPUs. That’s not even a scientific step. It’s just a coincidence that it didn’t happen sooner.

  2. Kim Weaver says:

    I noticed he left out the game Go in his analysis. Go is an intuitive game, yet an AI beat the best Go player in the world. How did it do it?

  3. Lucien Grondin says:

    There was a paper some time ago about a new training method for AI that worked with fewer training examples. The example given was precisely hand-written character identification. I wish I had a better memory and could remember the title of the paper 🙁 Could be this though: 

  4. Timothy Busbice says:

    The 3rd wave is Biologic Intelligence (prome.ai), and we are working hard to make the next step a reality.

  5. Taeshawn Threatt says:

    I enjoyed the very concise and direct information given in this presentation. It clearly expressed where AI research has been and where it is going.

  6. Bankside1997 says:

    And from that third wave of AI, artificial general intelligence will emerge. Once artificial general intelligence is created and operational, the emergence of artificial superior intelligence is inevitable.

    The difference between an advanced artificial general intelligence and an emergent artificial superior intelligence is nonexistent. The Singularity is just a word describing an advanced artificial general intelligence that is now independent of human training.

  7. xXSWIZZERXx says:

    It needs emotional & descriptive feedback coupled with reason through the process of elimination. It’s that spark: you know it’s a child because you feel it’s a child, you don’t just know. You need to come up with some kind of emotional database (facial expressions, colour differences, patterns, etc.) for feedback reasoning. This would just be one layer to add to help make the outcome more precise. The more precise & quicker you can make the emotional feedback, the more human a response you’re going to get. The first place you would start is by creating a mass reference database of some sort containing emotional responses. Run it through millions of different emotion-response tests & scenarios, then fine-tune it. For example, when you use an AI car on the side of a cliff, it can see the depth at the side because you taught it depth. But it has no emotional feedback about depth & danger because of the one-dimensional way you programmed it. An easier way to understand it is teaching a child about danger, and the child taking that & applying it to other potentially dangerous situations. That is the most precise response you’re going to get back from AI, in my opinion: getting it to cross-reference a database of emotions to come back with a relevant emotion. Unfortunately, though, a single change in a frame can change the emotional outcome, making it impossible to ultimately find the absolute outcome, so you’re just going to have to best-guess & use what you can to reach a high percentage.

  8. mooncoder says:

    Nice talk, but it conflates AI with machine learning alone. If this is “a DARPA Perspective on Artificial Intelligence”, it is a very narrow perspective.

  9. Im Always Right says:

    Imagine a SUPER AI creating its own SUPER AI that can’t even be deciphered by humans… 😕🙁 It’s honestly very scary to think about.

  10. Jorge Gamaliel Frade Chávez says:

    Right now I would like to have a virtual assistant to generate random questions for 50 different exams on geometry (or mathematics) and to grade them. 😀

  11. Jorge Gamaliel Frade Chávez says:

    Right now I would like to have a virtual assistant to generate random questions for 50 different exams on geometry (or basic mathematics) and to grade them. 😀

  12. Jedi Sentinel says:

    Your organisation is just a front. The technology already exists, and the elites release it publicly when they feel like it.

  13. Higgins Williams says:

    Level-headed, my ass. This is what mis- and disinformation is about. Right out of the gates they say the singularity objectives being concluded are not level-headed, and then it segues into the perception they want you to have. I know it for a fact: DARPA IS EVIL AS HELL.

Comments are closed.