The Intersection of Robotics & AI


Bill Studebaker:

Good afternoon. I’m Bill Studebaker, president and CIO of ROBO Global, and I have with me today Dr. Wyatt Newman, who is a ROBO Global advisor and, importantly, has been researching the fields of robotics and AI for the better part of the last three decades. During his tenure, he has been issued a multitude of patents and authored over 150 publications while teaching at Case Western Reserve University, and more recently he co-founded a robotics company called RoadPrintz that we’ll discuss. Welcome, Wyatt.

 

Wyatt Newman:

Thank you, Bill. Good to be here.

 

Bill Studebaker:

Great. Well, today we’d like to discuss the intersection of robotics and AI. As an advisor to the ROBO Global ETF funds, Wyatt provides our team with pivotal insight into the evolution and commercialization of these technologies.

First question I have, Wyatt: as a robotics engineer who has been at center stage of this industry’s development for the better part of the last three decades, I think it would be helpful to get your insight and perspective on the renaissance we’re seeing in the industry, and on the importance of robotics and AI coming together.

 

Wyatt Newman:

Yes, thanks. I do feel strongly about this. I’ve been involved in robotics, as you’ve noted, for at least three decades. Robots have been around for quite a while. In fact, General Motors installed the first industrial robot, from Unimation, in 1961, so we’ve had robots around for roughly 60 years. That in itself is not new.

We have seen the impact of robots in manufacturing. In fact, manufacturing as a share of GDP in the US has been relatively constant, yet employment in manufacturing in the US is less than half of what it was in 1960. So clearly robotics and automation have had a big impact on manufacturing, but spread over 60 years, so it’s been quite incremental.

The new thing that’s happening now, in what is broadly construed as the Fourth Industrial Revolution, is the intelligence of robots. It’s really the lack of intelligence that has held robots back: all of the peripherals you need, the long time it takes to program them. There have been dramatic advances in artificial intelligence in recent years, and those are going to have a proportionate impact on robotics as well.

 

Bill Studebaker:

Wyatt, I guess the new buzz in the field of robotics and AI today is OpenAI’s GPT-3, a language-prediction model trained on text from the internet that can generate almost any type of text. In my opinion, it’s an unexpected step toward machines that can almost understand anything. It’s going to shape a lot of industries and a lot of our capabilities, and I’m curious about your interpretation of this recent success and how it may affect robots and their capabilities.

 

Wyatt Newman:

Right. In fact, it might not be immediately obvious how some of these natural language processing systems will have an impact on robots, but they will. We’ve become used to things like Siri and Alexa responding to our voice commands, and they’re becoming increasingly intelligent, but we’ve also seen in the news some of the dramatic advances in natural language processing from projects like GPT-3.

And I brought a bit that I’d like to read, a quick excerpt from an article last month in the New York Times entitled “A Robot Wrote This Book Review.” So here’s a little bit of that. This is text that was generated by GPT-3; no human wrote this. It was told to write a book review of a book on artificial intelligence, and it came back with: “The authors have examined the full range of AI technologies, from computer vision systems to natural language processing and written about them in a way that will appeal to both experts and laypeople. One of the most inspiring aspects of this book is its scope. The authors delved deeply into the potential of AI in all areas of human enterprise. They describe the impact of AI on healthcare, economics, geopolitics, law, urban development, governance, journalism, the military and even the life sciences.”

And it goes on for several more paragraphs. It is an astonishing piece of writing. There was no place it could have looked this up; rather, it had to put together, on command, a novel description of a novel book when told, “Write a book review of this.” It did benefit from much that is online. These systems are trained by scraping the internet for vast numbers of examples, which basically teaches them how to read. But what it came up with was, I would describe it as, sophisticated and erudite. It was relevant. It was informative.

Not only were the grammar and the vocabulary perfect, but the result was appropriate to the context: it was told to write a review of this book, and it did. It is very tempting to ask, “Did GPT-3 actually understand the book?” Now, this is a word we seldom, or perhaps never, use with computers. We don’t accuse them of understanding anything, but it’s getting awfully close. It’s a little hard not to say.

Another example: just last October, a robot from Oxford University addressed the UK Parliament. And it wasn’t just a recording, a robot going through recorded motions. No, it did question and answer on the topic of technology and art. Now, this was a little closer to robots with embedded intelligence, because it was a machine that had a head, had sensors, and had arms, and it’s able to paint.

All right, so this is a machine where they have taken the intelligence of natural language processing and put it into the robot, but the robot also generates art. It’s using the same technology, deep learning, that’s in the natural language processing applications, but applying it in different domains. In this case, specifically, making art by executing strokes that create something physical. And that’s where the crossover is going to be between what we’re seeing from natural language processing and what we can expect from future robots.

I’d like to reiterate: it’s not a matter of looking up answers. We’re used to that; you go online, do a search, and find the fact you’re looking for. No, it has to take relevant facts in context and craft intelligent, novel responses. So it’s not that natural language processing in itself is going to be transformative for robots. The connection instead is what’s under the hood. It’s the underlying technology, specifically deep learning, which allows a system to become more competent through experience, and that experience can either be personal experience, gathered by the robot itself, or, as in the case of GPT-3, borrowed experience from the internet. So this is how I expect we’re going to see AI spill over into robots.
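Newman’s point about “competence through experience” can be made concrete with a toy sketch: a one-parameter model that improves only by repeatedly observing examples and nudging its weight to reduce error. This is the same principle, vastly scaled up, behind the deep learning systems discussed here; it is an illustrative sketch, not code from any system in the interview.

```python
# Toy illustration of learning from experience: fit y = w * x by
# gradient descent on squared error. Each pass over the examples is
# "experience" that makes the model a little more competent.

def train(examples, lr=0.05, epochs=200):
    """Return the weight w learned from (x, y) example pairs."""
    w = 0.0
    for _ in range(epochs):
        for x, y in examples:
            error = w * x - y
            w -= lr * error * x  # nudge w to shrink the error
    return w

# Experience drawn from the true relationship y = 3x.
examples = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]
w = train(examples)  # converges close to 3.0
```

The same loop, with millions of parameters instead of one and image or text data instead of number pairs, is what "trained by scraping the internet" refers to.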

 

Bill Studebaker:

So, Wyatt, we’re talking about robots that are now creative, something that would have been thought impossible years ago. I’m curious about your interpretation of this alongside ROS. Maybe you could educate the viewers about the Robot Operating System and the importance of that evolution. Together with building blocks like GPT-3’s learning techniques, it is now allowing developers to start robotics companies for pennies on the dollar, where it would have cost significantly more before, and that’s going to help expedite the advancement of the technology.

 

Wyatt Newman:

Right. That’s a key point, and I do think that the Robot Operating System in particular has had a huge impact on the transformation of robotics. Historically, in case after case, the cost of developing software has been grossly underestimated. There were many startups that set out saying, “Oh, we’re going to make this type of robot.”

They get into their software development, and years later they find that they’re out of money and don’t have something working. It’s the Robot Operating System, along with the open source movement, that has enabled building up a foundation which can be borrowed and immediately built on, like Lego building blocks. You can take these capabilities knowing that they’re already good, already vetted.

Some of the smartest experts in the world have focused on making these pieces, and you can take that work and incorporate it into novel systems. This is pre-competitive technology, but it allows you to build the competitive technology that goes into the marketplace. So it’s a tremendous advantage. Now you can say with confidence, “Yes, I’m going to make a novel robotic system, and I know how to do it because I’m going to use these ROS building blocks.” That’s been huge, I think, for the resurgence of robotics.
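The “Lego building blocks” idea rests on publish/subscribe messaging: ROS nodes never call each other directly, but exchange messages on named topics, which is what lets independently vetted components snap together. A minimal, illustrative sketch of that pattern in plain Python follows; real ROS nodes would use rospy or rclpy, not this toy Bus class.

```python
# Toy publish/subscribe bus, sketching the decoupling ROS provides.
# A "camera" node publishes detections; a "planner" node reacts to them.
# Neither knows the other exists -- they only share a topic name.

from collections import defaultdict

class Bus:
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, msg):
        for callback in self.subscribers[topic]:
            callback(msg)

bus = Bus()
log = []

# Planner node: react to anything seen on the /detections topic.
bus.subscribe("/detections", lambda msg: log.append(f"plan around {msg}"))

# Camera node: publish a detection without knowing who is listening.
bus.publish("/detections", "pedestrian")
```

Because components only agree on topic names and message formats, a vetted vision node can be swapped in or out without touching the planner, which is the reuse Newman describes.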

 

Bill Studebaker:

Certainly these days, Wyatt, robots are becoming smarter and more efficient with the help of computer science. I’d like you to provide some insight into examples where AI is being applied to robots, and maybe the evolution from what was presumed to be weak AI toward strong AI. We’re getting to systems that now have much more intelligent capabilities, so a little distinction there would be helpful.

 

Wyatt Newman:

Okay. I’d like to point to a specific example, one we’re all becoming familiar with, of how we fold AI into robots. For the most part, historically, it’s been, “Oh, I hard-coded the robot to do a specific behavior, and my AI does something else entirely,” question and answer, perhaps. But an excellent example of the blending of the two, of which we’ll see a lot more, is autonomous vehicles.

With autonomous vehicles, really the hope for them coming to fruition rests on AI brains, and specifically deep learning. A huge part of that is understanding images. When the vehicle, with its cameras, has collected the scene around it, it needs to, and again I’ll use the word understand, it needs to understand the context. It needs to be able to say, “These are the lines of the road. That’s a pedestrian over there. Here’s an obstruction. This is a work zone. There’s a car stopped in front of me.”

All of that has to be interpreted from seeing, and this has traditionally been very difficult for machine vision. Deep learning has completely taken over the field, and it has brought autonomous vehicles much closer to reality. Now, we’re not at level-five autonomy; you can’t just say, “Okay, take me home,” and go to sleep. There may be anecdotes of that, but it’s still a bad idea. But there are spinoffs along the way. We have driver assistance that’s really very intelligent: lane-drift control, automatic braking when the car sees something in front of you.

So we’re seeing spinoffs that do not require one hundred percent foolproof autonomy. It may be decades before we get to full, level-five autonomy, but we’re already seeing spinoffs along the way. Now, we can expect the same type of revolution with other robots, not just cars. We can expect machines to become incrementally more intelligent, and we don’t have to wait for them to become foolproof or completely autonomous.

Notably, the machine vision being developed for autonomous vehicles has immediate applications in robotics. Vision is the primary sense that robots use, and understanding the scene is very important. If you do understand the scene, then you can deal with the many disruptions that come up; you can interpret them. You don’t need expensive peripherals around your robot. You can set things out loosely without precise fixturing, so your changeover can be much faster. We’ll get advantages from the developments in autonomous vehicles that spill over to robots, and I expect to see it in industrial robots, in medical robots, in service robots.

In all cases where you do not have a structured environment, you want your robot to truly understand what’s around it and make good choices.

 

Bill Studebaker:

Wyatt, how would you compare the importance and evolution of this trend in collaborative robots? The interesting thing from our vantage point, something we talk about a lot at ROBO, is that robots really aren’t stealing our jobs; if they are, they’re doing a bad job of it. You made the comment before about the evolution of agricultural robotics: back in 1900, roughly 40% of our workforce was in agriculture, and now it’s about 2%. We’re doing more with less. I think the notion that robots are going to take all of our jobs is quite unfounded. I’m curious about your interpretation of that and how robots really are changing the way we live and work.

 

Wyatt Newman:

Well, certainly much of our standard of living comes from productivity. Productivity means that we have tools that help us do more with less, with less labor in particular. Robots are doing that; AI is doing that. We’ve seen this before: for example, when automated textile mills were coming in, there was the fear that all of the people doing hand weaving were going to be out of work, when in fact the industry increased its volume, people got paid better, and employment went up.

There are specific jobs that go away. We no longer have rooms full of people doing addition; calculators and spreadsheets displaced that. There will be disruptions and transitions like that, but ultimately AI and robots are going to augment what we do. Take cobots, for example: another tool, one that can work synergistically with people to make us more capable and more productive.

 

Bill Studebaker:

Yeah, I think what’s interesting about robots, and maybe you can talk a little about the cost of collaborative robots and the integration costs. Five or 10 years ago, it would have cost hundreds of thousands of dollars to put a system in place, or very likely a lot more once you add in the integration costs, which could be four to five times the cost of the robot itself. So I’m curious what you’re seeing, and about some industries where collaborative robots are being applied.

 

Wyatt Newman:

Yeah, I would agree with your historical estimate of the cost of peripherals for the robot. Your robot during that period may have cost $100,000, and then figure five times that for all of the peripherals to feed the robot, because you had to structure the environment. You needed foolproof feeders with high precision. You needed to bring everything to the robot. It was quite an investment to bring in that automation, and the robot ended up being a relatively small part of it.

Now your robot may cost $40,000 for a fairly large robot, or $20,000 for a tabletop model. And with its increasing competence, you don’t need as much in peripherals around it; the more it can understand on its own, the better off you are. Also, importantly, you need less time to program it. An example I would bring in is a recent startup called Path Robotics. They do robotic welding, using deep learning for image interpretation of novel parts, and the humans coach the robots through how to do it without doing any programming at all.

This symbiosis of human and robot lets each bring in its relative advantages and get things up and running faster. The robot is much cheaper to deploy in terms of peripherals, since you don’t need the precise fixturing, and in terms of programming, since you don’t need complex code that you hammer out. There are other similar cases where robot plus human is a winning combination, and that’s where cobots come in.

A robot will never forget where its parts come from. It’s not going to accidentally get hypnotized, reach into the wrong bin, and put the wrong resistor into a spot where it doesn’t belong. But humans still have advantages, on one end with fine motor skills: if there are parts of an assembly that are difficult to handle, the human can work alongside the robot. More importantly, the human is good at recognizing when something’s not right, when something went wrong: “That board doesn’t look quite right. That part looks funny. It doesn’t feel the way it should.”

Humans are good at recognizing that something is wrong and good at troubleshooting it. So that combination, putting the much higher-level intelligence of the human together with the robot, is having a lot of industrial impact right now with cobots.

 

Bill Studebaker:

Well, I guess what’s really exciting for us is that the speed of computing is roughly doubling every 18 months, and the cost of computing has plummeted and keeps falling every year or two. This is now creating an array of use cases that, a few years ago, for most applications, were just Elon Musk science fiction. Fast-forwarding 10 years, where do you think this industry can go? With the collaboration that’s happening, it seems like there is just untouched territory here.

 

Wyatt Newman:

Well, yes. We mentioned a couple of the drivers of this Fourth Industrial Revolution for robots: the existence of open source code that we can use as building blocks to get up and running rapidly, so you don’t get lost in the black hole of software development, as well as the artificial intelligence that is now handling complex things like machine vision. That’s a big deal. The increase in computational power is also a benefit. Some of the AI requires some pretty heavy number crunching, and the fact that our computational capability is increasing as well helps carry the rest of it.

I also look forward to quantum computing, which is going to be another nice jump. That may be a ways away, but we’re already enjoying the benefits of faster computing now, especially in graphics processing units, GPUs.

 

Bill Studebaker:

I think that’s a good segue to an introduction to the company that you recently founded. RoadPrintz, I think, is a good illustration of collaboration between a robot and a person performing work that you can still characterize as dangerous and dirty, and it’s creating a whole new application. That’s what’s exciting for us about the field of robotics and AI: virtually every industry can be refined, and there isn’t an area that can’t use improvement. So I’m curious about your company. Maybe you could tell us how you got the idea of starting it and what the technology is?

 

Wyatt Newman:

Yes, thanks. Yeah, I’m pretty excited about it, and it is a good example of the breadth of applications that are becoming possible now. The work involved in putting down what are called transverse symbols on pavement really hasn’t changed over the last hundred years. I have photographs that show a road crew from a hundred years ago and a picture that I took last year; put them side by side and you couldn’t tell the difference. It’s still done by hauling out large plywood templates. You put them down on the road, you paint over them, you wait for the paint to dry, and then you put them back in the truck and drive off. The same way it was done before.

My founding partner and I saw an instance of this firsthand: he was involved in designing a new streetscape in his town, and they ditched it because of the cost of the painting. Painting long lines is pretty well handled; that’s largely automated. But all of the other transverse symbols, the bike lanes, the cross-hatching, and the symbols we’re used to, the turn arrows, the crosswalks, the stop lines, lettering like school zones, are all done by hand.

So these are road crews, typically three to five people, out there with a pair of trucks. They set out cones, direct traffic, and muscle around these big templates, and they frequently get hit; there’s a high injury and death rate among people painting streets. So that was also an inspiration. What we’ve put together is a large robot in the back of a truck: the driver goes to a work zone and tells the robot what to paint.

And telling the robot what to paint is really very intuitive. You drag what are essentially electronic templates, or stencils, around on a touch screen. While looking at a view of the street from a camera, you drag the symbols into place and say, “The turn arrow goes there. Paint.” So the human is still involved in making the choices: “This is what I need, this is where I need it, and it’s safe to do it now.” That’s all hard stuff to automate. But as far as “paint this here” goes, that’s something a robot can do easily.
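The “paint this here” step reduces to mapping each point of the dragged stencil through a 2D rigid transform (a rotation plus a translation) into road coordinates for the arm to follow. A hypothetical sketch of that mapping follows; the function and names are illustrative, not RoadPrintz’s actual code.

```python
# Hypothetical sketch: place a stencil at a commanded position (x, y)
# and heading on the road. Each template point, given in the stencil's
# own frame, is rotated by the heading and then translated.

import math

def place_stencil(template_pts, x, y, heading_rad):
    """Transform template points (stencil frame) into road coordinates."""
    c, s = math.cos(heading_rad), math.sin(heading_rad)
    return [(x + c * px - s * py,   # rotate, then translate in x
             y + s * px + c * py)   # rotate, then translate in y
            for px, py in template_pts]

# A 1 m arrow shaft, placed 5 m ahead of the truck and rotated 90 degrees.
arrow_shaft = [(0.0, 0.0), (1.0, 0.0)]
placed = place_stencil(arrow_shaft, x=5.0, y=0.0, heading_rad=math.pi / 2)
```

The same transform, chained with the camera-to-road calibration, is what lets a symbol dragged on a touch screen land at a definite spot on the pavement.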

So it’s an example, again, of human-robot synergy. The human stays inside the truck and doesn’t get hit by traffic; the robot does what it does well and doesn’t get exhausted. In combination, they will be more efficient, reduce labor, and reduce casualties. It’s a good example of the synergy between humans and robots, and, as you mentioned ROS before, it’s all ROS-based, which allowed us to get up and running in record time.

 

Bill Studebaker:

Well, that’s great, Wyatt. Listen, we certainly appreciate your thoughts on the industry. It’s our opinion that this is one of the most important trends of our lifetime. We understand there are a lot of cross-currents in the market, and a lot of things have been thrown out, baby with the bathwater, robotics and AI included. But I’m curious how you would summarize the growth we should expect to see. In our opinion at ROBO Global, this is going to be one of the main focus sectors as we come through this economic situation, and the robots, in our opinion, are here and are only going to grow in importance.

 

Wyatt Newman:

Yeah, thanks. I think there are some highlights, some takeaways, from our discussion today. First, we’ve all seen the dramatic recent breakthroughs in AI. The fact that they’ve come mostly in natural language processing is really just a matter of how digestible that is to the public; everybody understands speech, context, and understanding. But what’s important is what’s under the hood. The same technology will apply to robots doing physical things: putting parts together, painting, manufacturing operations in space, surgery, service robots. Also important is that, as we’ve seen, we don’t have to wait for it to be perfect to get immediate benefits.

The advances right now can be realized, and leveraged, through human-robot collaboration, thus the cobots. Whatever the robot is smart enough to do, it does; you fill in the rest with a human supervisor. So supervised autonomy, or cobot collaboration, allows us to get immediate benefits out of the current advances in AI. Because of that, we can expect the rapid advances we’re seeing in AI to be matched by correspondingly rapid growth in robotics. It’s a very exciting time for robots.

 

Bill Studebaker:

That’s great. Well, Wyatt, thank you so much for your time and we look forward to having you back on soon.

 

Wyatt Newman:

Great. Thanks a lot, Bill.


