Roomba Roomba Roomba!
My wife Alexandra and I recently bought an iRobot Roomba vacuum cleaner. Ours is the Model 690, not the Model 980. I’ll try to say something below about why that difference matters.
Our Roomba, named “Sopapilla II,” took its maiden voyage around the house yesterday, and we were amazed by how well it cleaned. Our hardwood floors, area rugs, Turkish rugs, Saltillo tiles, and other surfaces have never looked or felt so good on bare feet. We plan to let it do its dandiest in our guest quarters as well.
All this is great–but here’s the rub. According to Gary Marcus and Ernest Davis in their excellent book Rebooting AI: Building Artificial Intelligence We Can Trust (2019), deep learning on its own is not going to get us where we want to go. Or, more pithily and now famously, as Peter Thiel put it: “We wanted flying cars, instead we got 140 characters.”
What? No Driverless Cars! Get Outta Here!
It might help to know that Roomba uses deep learning. In Rebooting AI, Marcus and Davis argue that (1) deep learning has been an important step in the right direction, yet (2) deep learning on its own will not be sufficient to get us what we want, and so (3) we’ll need to combine “classical AI” (top-down approaches) with machine learning (bottom-up approaches).
Why? Take the example of driverless cars. Marcus and Davis point out that an environment like a city is open-ended in the sense that new objects, movements, and scenarios present themselves to us each day. Because it is open-ended, complex, and variable, human beings need to constantly adjust to the fluid situation they find themselves in. But deep learning, on its own, is only able to handle bounded environments. An Arizona highway on a sunny day is not like a New York street in Midtown or near Penn Station. While the former tends to be a bounded, determinate environment, the latter is filled with surprises. And, Marcus and Davis argue, no amount of deep learning on its own will enable an AI to deal, as human beings do, with surprises of the kind to be found in Times Square. For, by definition, a surprise is just the thing that we didn’t see coming; here and now it confounds our understanding and challenges our expectations. In the face of surprise, therefore, we need to adjust our map, our understanding, our expectations accordingly–and often swiftly.
But deep learning, which is somewhat like a blank slate that builds up certain patterns over time due to Big Data, computing power, and algorithms, cannot account for, and behave smartly in the face of, unknown unknowns. For Marcus and Davis, this means that deep learning on its own won’t be enough to get us to driverless cars, robot servants, and more.
To handle the complexity that ordinary and extraordinary situations throw at us, they urge, AI will need some top-down reasoning skills, mapping skills, and the like. In philosophical lingo, empiricism will need to reacquaint itself with rationalism; the two will need to get married.
Deep Learning And Classical AI
I got an intuitive sense of the limits of deep learning during our Roomba 690’s maiden voyage yesterday. I’d just gotten out of the shower and, sure enough, it was stumbling and bumbling into the bathroom. Naked, I sat on the counter after the shower until it had cleaned some and bumbled on out of the bathroom. You see that it has no concept of privacy.
According to MIT Technology Review, as of 2015, “The Roomba now sees and maps a home.” This is a step in the right direction, and evidently our 690 is behind the times. We read: “The navigation approach used by the 980 is known as simultaneous localization and mapping, or SLAM, which means it builds a map as it goes along and refers back to it.” Therefore, as I vaguely understand it, the 980 seems to have powers roughly akin to human perception, to human conceptualization, and to some feedback between the two. At least as of 2015, it didn’t (and my hunch is that it still doesn’t) know how to classify kinds of things such as cats, dogs, and humans, all of which have properties (is furry, likes to hiss, purrs when petted, etc.) that, if known, would help determine the robot’s behavior.
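To get a feel for what “builds a map as it goes along and refers back to it” means, here is a toy Python sketch of just the mapping half of the idea. This is my own illustration, not iRobot’s actual algorithm: real SLAM also has to estimate the robot’s own position from noisy sensor data, which is the hard part and is omitted here. The floor plan, function name, and grid symbols are all made up for the example.

```python
from collections import deque

def coverage_clean(floor_plan, start):
    """Clean every reachable open cell ('.') on a grid.

    The robot keeps a record of cells already cleaned (its 'map')
    and refers back to it so it never cleans the same cell twice.
    '#' marks a wall; cleaned cells are marked 'c'.
    """
    grid = [list(row) for row in floor_plan]
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])   # cells the robot still plans to visit
    cleaned = set()             # the map built so far

    while frontier:
        r, c = frontier.popleft()
        if (r, c) in cleaned:
            continue            # the map says we've been here already
        cleaned.add((r, c))
        grid[r][c] = "c"
        # Look at the four neighboring cells; queue any open floor.
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == ".":
                frontier.append((nr, nc))

    return ["".join(row) for row in grid], len(cleaned)

# A tiny room with one obstacle in the middle.
plan = ["#####",
        "#...#",
        "#.#.#",
        "#...#",
        "#####"]
cleaned_map, n = coverage_clean(plan, (1, 1))
```

Even this toy version shows the payoff of having a map: without the `cleaned` set, the robot would wander back over the same cells forever, which is roughly what our map-less 690 does when it bumbles into the same corner three times in a row.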
Appropriate Technology And A Better Tomorrow
In closing, I’d like to come back to Thiel’s quote: “We wanted flying cars, instead we got 140 characters.” He’s alluding to the juxtaposition of the grand vision of the Space Age (“We wanted flying cars”) with the crappy actuality of social media (“instead we got 140 characters”). His point is that this third AI season isn’t close to being ambitious enough.
While I’m not sure that Thiel’s imagined future resembles my own (I discuss my utopia at the end of this IHMC talk), I do, generally speaking, appreciate his insight, his flouting of orthodoxy, and his candor. In that spirit, I wonder what appropriate forms of technology could contribute to a post-Total Work world.
What, I ask, would be an appropriate technology, vetted by wisdom, (a) that would enable human beings to diminish the amount of toil they perform regularly, (b) that could free us up so that we could engage with contemplative questions about the nature of reality, and (c) that could also free us up so that we could act decisively and wisely–not just for the sake of homo sapiens but for the sake of all sentient beings–in this time of ecocide?
I welcome any appropriate technology that would knock us out of our slumber–the slumber that the Armenian mystic Gurdjieff warned us about almost 100 years ago.