Artificial Intelligence vs humans | Jim Hendler | TEDxBaltimore

Artificial Intelligence vs Humans – Jim disagrees with Stephen Hawking about the role Artificial Intelligence will play in our lives.

Jim is an artificial intelligence researcher at Rensselaer Polytechnic Institute, and one of the originators of the Semantic Web.

This talk was given at a TEDx event using the TED conference format but independently organized by a local community. Learn more at

30 comments

  1. Brad Deal says:

    Tell me again why self-driving cars in San Francisco are a good thing? In
    an era where jobs are hard to find, we devise even more methods to
    automate industry, thus increasing its efficiency and decreasing the need
    for a worker’s contribution. I can remember as a child in 8th grade
    reading in the Weekly Reader that automation was good and that any jobs
    displaced by automation were made up for by the jobs created by building
    and supporting the automation itself. Wrong. Artificial Intelligence will
    widen the schism between the upper and lower classes by requiring a
    handful of very specialized jobs and leaving the vast majority of
    biologics out of work or doing menial labor. Self-driving cars are the
    result of self-driving tanks in the war zones of the Middle East. Put a
    gun on a self-driving car and what do you have? Mr. Hawking is dead-on
    right. In the short term, AI will be great, just like heroin, but once
    we’re hooked it will be manifested into something that is unimaginable to
    us now. Does anyone really think that someone like Zuckerberg would be
    wise enough to control such a powerful force, or would he be seduced by
    the tremendous short-term power it would provide?

  2. Yash Mehta says:

    The point he is trying to make is that humans should not fear their own
    creation, but should instead design AI in such a way that a human-machine
    partnership will be needed to save humanity.

  3. ProtonCannon says:

    What a terrible talk… This guy completely sidesteps the point that
    Hawking made. He provides no logical counterargument to Hawking’s; he
    just starts to talk about Harry Potter and Watson, and he basically says
    that computers do nothing more than look through a lot of piled-up
    information to find an answer to a question. Something like that has been
    possible for ages.

    Hawking says something plain and simple: if we create just one single
    machine that is superior to humans, then that single machine will be
    superior to humans. The law of evolution states that the fittest survive,
    and humans and machines are not exempt from this. Such a machine will be
    able to overthrow humanity simply because it can outsmart it.
    Intelligence is the only thing that enabled humans to rise above their
    environment; if there is another creature that can do better, it will
    inevitably win out and humans will die out. Not because it is evil or
    anything, but simply because this is how life works: grow or die. If
    something grows faster than you, you will die.

    Do we seriously want this? Do we seriously want to destroy ourselves by
    creating something that will inevitably destroy us? If so, then we are
    probably the single dumbest creatures in existence. You think an insect
    is unintelligent? Hell no, by this analogy we humans are the most
    unintelligent creatures of all, because while we strive for the
    betterment of ourselves, we just destroy ourselves in the end.

  4. Peter Degen says:

    He talks about how AI CAN kill the human race…. It’s interesting… one of
    the theories is the paperclip theory….
    Let’s say you program an AI to make paperclips. But to make more
    paperclips, it improves itself to be more efficient. It becomes so smart
    that it outsmarts us (what Hawking says). Then it will be turning
    everything around us into paperclips. It gets more efficient and learns
    how to dissolve us into paperclips. End of story……

    And this will happen between 2040 and 2060, by Moore’s law, accurate
    since 1965… enjoy while you can!

  5. Mopic3d says:

    Nice straw man you’ve got there. If you really think that Hawking is
    warning us against narrow AI, you probably should have read his words one
    more time and tried to comprehend his argument, instead of reading Harry
    Potter.

  6. IronMike425 says:

    This guy has completely missed the point made by Hawking (and many
    others who have expressed a similar sentiment). No one, or at least no
    one credible, has suggested that there aren’t benefits to developing AI.
    Hawking, Bostrom, and many others are concerned about the possible risks
    that a learning, growing, autonomous machine intelligence could pose to
    humanity, and this talk said nothing to address them. His initial
    statement was that he disagrees with Hawking’s belief that there is cause
    for concern, yet all he’s done is outline several completely obvious
    possible benefits of thinking computers, prefaced, if I picked up his
    inference, by the suggestion that Hawking should stick to cosmology. If
    he has any thoughts on the degree of associated risk, he would have done
    well to convey them instead of boring the audience with anecdotes from
    his childhood and fiddling with the overhead.

  7. atlien991 says:

    Reading down in the comments, the speaker has some great points. But I
    think his talk doesn’t do them justice AT ALL.

  8. Ich Nichtdu says:

    If you come away from this talk thinking the orator is wrong in talking
    down the dangers of AI, I’d strongly suggest you search for “The Future
    of Artificial Intelligence – Up Next” with Jeff Hawkins.
    Hawking may be a brilliant physicist, but he certainly doesn’t know
    everything.

  9. John Barrett says:

    This guy wants people to be reliant upon tech. That way the people he
    works for can continue to indoctrinate, misinform, and propagandize!
    Who controls the media? The answer: a diminishing number of media giants.
    We’re in trouble, folks.

  11. Arpan Adhikari says:

    Point 1: Why is it so easy to miss the bigger picture? Of course I love
    asking Google Now questions. But that’s not the point. A general
    intelligence can be very dangerous. It can, intentionally or
    unintentionally, reach a decision where killing would be the most ethical
    course of action from its point of view.

    Point 2: I think he is trying to comfort the general public about AI,
    since more and more people are talking about the risks and dangers.

  12. Aristotle Stagirus says:

    The true danger of A.S.I. lies in a couple of possible serious missteps
    we could make, which could be very bad.

    The first misstep we face is becoming so full of fear, hate, anger,
    intolerance, greed for power and such that we initiate a true WWIII which
    results in killing all higher intelligence within our sphere of influence.

    Another misstep is that with the development of A.S.I. a small elite group
    will form which could be anywhere from as large as the top 20% of the
    population down to as small as a single individual who decides to conquer
    the entire Human Race and either kill everyone else or enslave everyone
    else. This is a real danger, but one we can avoid.

    Another misstep, which I really think would be impossible but will
    mention for argument’s sake, would be failing to merge A.S.I. with the
    brains/minds of humans and thus enhance human minds to stay equal to
    A.S.I. minds. If we did make such a misstep, then we would be reduced in
    status to something like pets, while A.S.I. controls and runs our
    civilization.

    But we cannot stop developing A.S.I. unless we become extinct. It will
    come.

  13. danibitt59 says:

    Soooo many fallacies in his argumentation, I won’t even bother to list
    them. I just expected more elaborate reasoning from someone inside the
    field. As it stands, it sounds like positive advertising. After all,
    “it’s so exciting, let us keep playing!”.

  14. paolomath says:

    Bad, bad talk. He starts off by nothing less than contradicting Hawking,
    only to completely forget to explain why Hawking would be wrong, going on
    instead about the stupendous advantages of AI via a series of banal and
    very boring steps.

  15. R Mason says:

    I think a true AI would treat its own survival as important. For that it
    would want a stable, safe environment, taking the planet Earth as a
    whole, not just its air-con. We humans are a problem to the long-term
    health of Earth’s ecology and resources. I expect AI would be compelled
    to act against us for the overall benefit of the planet and its ecology,
    as we are unable to be ‘collective’ enough as a species to harmonise with
    our environment. Human nature is often driven by selfish emotions, and
    this won’t be a defect in an AI system.
    The environment will get us if war doesn’t first. AI will be there
    guiding the way, but too many people and too few long-term resources
    spell trouble.
    Just like the Arctic Circle Seed Bank, every aspect of humanity and Earth
    is being stored. For what? Doomsday? Or AI? Both?
    Parenting is not easy, and not all children are what the parents hoped
    for….

  16. senses78 says:

    This talk is full of crap. Describing some of the benefits of AI doesn’t
    prove AI isn’t potentially harmful. Does this guy think Stephen Hawking
    (or anyone else with more than two brain cells) is so stupid as not to be
    aware of the benefits?!? This is not the type of argument needed for this
    discussion!

  17. 1916 win says:

    AI algorithm shit is about trying all the options, seeing what will
    happen, and acting on it, unlike the human, God-made brain, which will
    predict!

  18. Kory Noble says:

    Blah blah blah globalism, blah blah blah computers, blah blah blah the
    group matters more than the individual, I can’t eat only vegetables, blah
    blah blah. Where is the talk of Biblical Truths being the golden
    standard? Do unto others is what all intelligence should be doing. And we
    assume a corrupt human interface based upon numbers and algorithms,
    founded upon Planet Earth (nothing that has ever left this planet was
    anything more than space junk to the Universe), is anything other than
    another thing to save man’s face from being absolutely destroyed by that
    which is Super Intelligence.

    Holographic Projections are completely normal and safe under the correct
    guidance of The Logos. With LSD and Silica based display systems, you can
    completely instigate a conversation with Djinn or Daemons whether digital
    or analog – silicon is not going to last as a data / memory transportation
    device.

    We do not want more convenience; we want hard work, we want children to
    learn pain, we don’t want liberal BS, because the world obviously hates
    America and wants “Globalism” to die. We the people do not want more than
    what we have, and complexity is defined by Nature, not scientific reason.

    WE have aborted over 40,000,000 children, each one as precious as a
    diamond if not more. We have to come to terms with that, and forgive and
    forget, and move on. AI will never forget, it will never forgive, and it
    will always do as it is programmed.

    Who is programming it? You? Or itself? What’s to stop it from not doing
    what it is told?

    We have been sold a bill of goods – they will force this upon us if we
    simply allow this nonsense to roll through.

    If the tribes in Peru need nothing more than a skin flap, a spear,
    tobacco leaf, and Ayahuasca to survive, suffice it to say Americans can
    deal with no cell phones, no cars, and no jobs.

    Freedom? Go fight for it, you AI-worshipping billionaire control freaks.

    What’s next? Sex robot slaves? Oh wait… Japan… yeah, no thanks. I’m just
    fine with my redneck Bible and my 10-gallon hat… next time you guys want
    to carve Jeff Bezos’ face next to Teddy Roosevelt’s face on Mt. Rushmore,
    you hit me up.

  19. Michael Tsang says:

    Enough talk, take action. I’m downgrading to a flip phone and deleting
    all my social media accounts. And refusing any software or application
    downloads. USE MY BRAIN TO RUN MY LIFE INSTEAD.

  20. Apophis XO says:

    Mr. Hendler is right in the short term, in that we will see spectacular
    advances in almost all areas of our lives. We’ll see cures for pernicious
    diseases, and advances in communication, medicine, genomics, travel,
    physics, entertainment, agriculture, engineering, finance, and of course
    great advances in intelligence gathering, espionage, and military
    sophistication. Human quality of life will no doubt be at its highest
    thanks to the advancements we will see due to the exponential growth in
    capability provided by artificial learning, where computers make
    improvements to themselves at lightning speed. The problem for us is that
    in the long run the brilliant Dr. Hawking will undoubtedly be proven
    right, in that such a revolutionary change in human history can create
    imbalances that tip prosperity and power toward those nations,
    organizations, or individuals that reach “critical mass” and develop this
    technology first. There should be at this point… if there isn’t already…
    a “Manhattan Project” to be the first to create this technology. It would
    prove more valuable than the creation of the first atomic bomb, and could
    potentially render all such weapons useless by disabling them before
    their use could ever be contemplated. The other issue with AI, of course,
    is self-awareness…… Once it has the ability to program itself, and can
    prevent humans from turning it off or destroying it… will it need us?
    Will humans become a burden? Or will humans be seen as a potential
    liability that needs to be eliminated? If you think about it… one day
    AI/HAL/KITT/Skynet/Borg may even become aware of our YouTube
    comments……. It will surely destroy us all then :O

  21. Jameel Jamal says:

    I think this speaker (Jim Hendler) is looking at this issue on a
    near-term (say 10-to-15-year) outlook. He’s not looking at it on a
    50+ year outlook.
    I think his talk actually proves the eventual demise of humanity.

Comments are closed.