Neurala, Inc. is a software company that developed The Neurala Brain, deep learning neural network software that makes smart products like inspection cameras, robots, drones, toys, consumer electronics and self-driving cars more autonomous, engaging and useful.
Neurala provides off-the-shelf and customized solutions spanning the world’s highest-end applications to inexpensive everyday consumer products. With the Neurala Brains for Bots Software Development Kit (SDK) and an ordinary camera, products can learn people and objects, recognize them in a video stream, locate them in the video and follow them as they move.
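As a rough illustration of that learn, recognize, locate and follow loop with an ordinary camera, here is a minimal sketch using OpenCV's KCF tracker as a stand-in; the video source, bounding box and tracker choice are assumptions for demonstration and this is not the Brains for Bots SDK API.

```python
# Illustrative sketch of the learn -> recognize -> locate -> follow loop
# described above, using OpenCV's KCF tracker as a stand-in. The video
# source, initial bounding box, and tracker choice are assumptions for
# demonstration; this is not the Brains for Bots SDK API.
# Requires: pip install opencv-contrib-python
import cv2

def follow_object(video_source=0, initial_box=(200, 150, 120, 120)):
    cap = cv2.VideoCapture(video_source)      # an ordinary camera
    ok, frame = cap.read()
    if not ok:
        raise RuntimeError("Could not read from the camera")

    tracker = cv2.TrackerKCF_create()         # lightweight on-device tracker
    tracker.init(frame, initial_box)          # "learn" the object once

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        found, box = tracker.update(frame)    # locate the object in this frame
        if found:
            x, y, w, h = map(int, box)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imshow("follow", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"): # press q to stop following
            break

    cap.release()
    cv2.destroyAllWindows()

if __name__ == "__main__":
    follow_object()
```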
We spoke with CEO and Co-Founder Massimiliano Versace about Neurala’s history and how he and his team plan to put AI in the palm of your hand.
Heather, Anatoli and I were Ph.D. students at the Department of Cognitive and Neural Systems at Boston University. We came from far away (Iowa, Russia and Italy, respectively) to study what, at that time, was an obscure art: how to mathematically model brain functions to understand how our minds and behaviors work.
What started as an almost philosophical query rapidly morphed into a scientific and technological one: to test our hypotheses, we needed to simulate the system. Much as in physics, where you can’t study Jupiter by manipulating it or traveling to it, but only by simulating its properties on a computer, we did the same with brains and behavior. Our main issue was that simulating even a small brain with conventional hardware simply took too long.
That’s when we filed a patent on using what, at that time, were barely programmable pieces of hardware, GPUs (Graphics Processing Units), and got some amazing results. We were able to run our algorithms hundreds of times faster, and we knew we were onto something. Remember: this was a time when nobody cared about GPUs, and not really about AI either. But we knew the combination of elements we had in our hands was going to change the world. It since has.
A professor once told me that if I chose to work on something that I was passionate about I would come out on top, no matter the competition. It really comes down to that. Neurala’s founders have had only one passion. We chose to move into artificial intelligence (AI) because we wanted to figure out how to replicate thoughts in machines. We did not move into AI because it was a hot topic and everybody else was doing it.
We also have a vision for the technology that runs deeper than that of many AI newcomers. Today, 99.9% of the AI you see out there is fairly uniform, with everybody using almost the same stuff. And there is a sense that everything that can be invented has been invented.
That could not be further from the truth. We are only scratching the surface of AI. A good analogy is physics. We thought, after Newton, that we had cracked reality. Then came Einstein and quantum physics, and we realized how naïve and wrong we were. That “moment of Zen” is about to arrive in AI.
Much of our early and recent traction is based on our ability to leverage available hardware in a unique way, and to push it beyond what others do. In 2006, GPUs were not really that programmable. I recall going to a GPU vendor at that time; when they heard we wanted to use GPUs for AI, they told us they weren’t interested. At that time, the only way to program GPUs was to be fluent in computer graphics. We did not think of gaming applications, per se. We just saw the analogy between an array of pixels and an array of neurons and thought: let’s borrow this hardware for a higher goal. Today, many more options are available, and many more will be soon, so GPUs will eventually become obsolete.
With DARPA and NASA, they wanted to work with people who had expertise in Neural Networks and who knew how to make them work in the most challenging situations. One of the main challenges then, which still goes largely unaddressed in the industry today—Neurala excepted, of course—is how to have a system adapt and learn when deployed in the field.
It’s one thing to build something on a computer with some a priori knowledge of the problem; quite another to put it to work on a mobile robot exploring an uncharted planet. Learning and adaptation to novel information is key. There are orders of magnitude more challenges, and these challenges have shaped our DNA over the many years of working on “mission impossible” projects with DARPA and NASA.
First of all, I want to clarify that Neurala is not involved today in shipping our software on Mars – we have no visibility into what NASA did with our software, and, to tell you the truth, our mandate is to embed our software in Earthly applications and make it work here. As of today, there are no customers on Mars we know of who possess a credit card they can swipe!
We started our journey on very hard use cases, cognizant of the fact that nailing those use cases was going to give us an advantage over people who did not. NASA gave us a great challenge, and when you shoot for the stars, chances are you jump higher than anybody else.
The 2013 TechStars program was the moment where the founders looked at each other and asked: “Are we all in, or all out?” Up until then, we were very much involved in our academic efforts, with a sizeable lab of more than 20 people working on very interesting problems. We sat down and imagined where we wanted to be in the future: doing more of the same, or jumping into the next layer of difficulty, where advanced R&D would be married with deployment in tough use cases. We knew the difficulties we were going to face, but we thought it was worth it.
Cloud is a funny term. To me, it has only ever meant “remote compute and storage power.” As a company, we have always focused on building the smartest and fastest algorithms and placing them where they make the most sense. For many applications, that was the compute Edge, namely very close to where the sensor data and effectors, if any, were sitting. This could be on a drone, a mobile robot, or a static camera: computing and learning at the Edge makes a ton of sense, in particular if you look at issues such as response time (collision avoidance relying on cloud computing is suicide), data storage and transmission (computing at the Edge means not needing to do that), and, very importantly today, data confidentiality. That last point is really crucial: as we move toward a scenario where customer data gets more and more protected, we have the only AI on the market that allows you to learn on, and then discard, user data.
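To put the response-time point in perspective, here is a back-of-envelope sketch; the speed and latency numbers are illustrative assumptions, not measurements from any Neurala deployment.

```python
# Back-of-envelope illustration of the response-time argument above.
# All numbers are assumptions for illustration, not measurements.
drone_speed_m_per_s = 10.0      # assumed cruise speed
latencies = {
    "cloud round trip": 0.200,  # assumed network + server latency (s)
    "edge inference":   0.020,  # assumed on-device inference time (s)
}

for label, latency in latencies.items():
    blind_distance = drone_speed_m_per_s * latency
    print(f"{label}: {latency * 1000:.0f} ms -> "
          f"drone travels {blind_distance:.1f} m before it can react")

# cloud round trip: 200 ms -> drone travels 2.0 m before it can react
# edge inference: 20 ms -> drone travels 0.2 m before it can react
```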
So, we look at the cloud as a complementary system, providing a user interface to train and deploy AI and to back up brains, but not as a way to capture all that is capturable from our customers. We are not Google or Facebook.
L-DNN pushes AI beyond on-device inference and allows smart devices to learn, adapt, and interact in real time without relying on a server. This enables users to protect their personal information, since it never has to leave the device. It also reduces data usage and increases accuracy. This makes a huge difference for the end user: since we don’t require raw data storage, our customers see huge cost savings, significant time savings, and greater efficiency.
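For readers wondering what learning on the device without a server can look like in practice, here is a minimal sketch of the general pattern (a frozen feature extractor plus a fast-updating prototype classifier); the model choice, the update rule and the function names are assumptions for illustration, not Neurala's L-DNN implementation.

```python
# Minimal sketch of the general pattern behind on-device, serverless learning:
# a frozen pretrained backbone plus a fast-updating prototype classifier.
# This illustrates the concept only; it is not Neurala's L-DNN implementation.
import torch
import torchvision.models as models

backbone = models.mobilenet_v2(weights="IMAGENET1K_V1").features.eval()
for p in backbone.parameters():
    p.requires_grad = False        # the "slow" part stays frozen on the device

class_prototypes = {}              # the "fast" part: one prototype per class

def embed(images):
    """images: (N, 3, 224, 224) tensors; preprocessing is omitted for brevity."""
    with torch.no_grad():
        feats = backbone(images)               # (N, C, H, W)
        return feats.mean(dim=(2, 3))          # global average pool -> (N, C)

def learn(images, label):
    """Add or refine a class from a handful of examples, with no backprop."""
    proto = embed(images).mean(dim=0)
    if label in class_prototypes:
        class_prototypes[label] = 0.5 * class_prototypes[label] + 0.5 * proto
    else:
        class_prototypes[label] = proto

def recognize(images):
    """Return the closest known class for each image."""
    feats = embed(images)
    labels = list(class_prototypes)
    protos = torch.stack([class_prototypes[l] for l in labels])
    nearest = torch.cdist(feats, protos).argmin(dim=1)
    return [labels[int(i)] for i in nearest]

# Usage (random tensors stand in for preprocessed camera frames):
# learn(torch.randn(5, 3, 224, 224), "coffee mug")
# print(recognize(torch.randn(2, 3, 224, 224)))
```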
We’ve chosen to move away from this space to focus on inspections and smart devices. We’re interested in how we can put the power of AI in the palm of your hand.
We are going to see a major shift.
Consumer drones were very popular as a novelty item, but once you buy one or two and you have gone through the adrenaline rush of flying and crashing them everywhere, you are kind of done. Consumer drones that require piloting are going to become obsolete pretty soon, and drone manufacturers will need to invent the next interesting thing to do with them. I am convinced AI will be the tool to make this happen, with AI embedded in the drone and/or on the controlling device, making it perform actions and games that are engaging and novel for the consumer. We are spending a good amount of time partnering with the best drone manufacturers to bring this to reality, and two of these OEMs are planning to ship with our tech onboard in 2018. We are filing patents as well, so you can tell we are convinced this is going to “take off.”
On the professional side, the story is similar. Drones as flying cameras are going to cede the stage to drones that know what they are looking at, whether it’s a construction site, a pipeline inspection, or a cell tower inventory. Initially, the AI will come in post-processing, then move to the Edge, where the drone can not only understand what data it is looking at, but also bias data collection: look for, and analyze in depth, specific areas of interest, pretty much like humans do while inspecting structures. We have several engagements with large enterprises to begin this revolution.
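As a sketch of what biasing data collection at the Edge might look like, here is a toy triage loop; the labels, confidence scores and threshold are made-up placeholders, not output from any real inspection model.

```python
# Toy sketch of "biasing data collection" at the Edge: keep only the frames
# the on-board model flags as interesting instead of shipping everything to
# the cloud for post-processing. Labels, scores and the threshold are made up.
TARGETS = {"crack", "corrosion"}
THRESHOLD = 0.6

def interesting(detections):
    """detections: list of (label, confidence) pairs from the on-board model."""
    return any(label in TARGETS and conf >= THRESHOLD for label, conf in detections)

# Example stream: (frame_id, detections produced on the drone itself)
stream = [
    (1, [("pipeline", 0.90)]),
    (2, [("crack", 0.70), ("pipeline", 0.80)]),
    (3, []),
]

flagged = [frame_id for frame_id, detections in stream if interesting(detections)]
print(flagged)   # -> [2]: only this frame is stored, transmitted, or revisited
```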
The immediate future is simple: deployment, deployment, and deployment. We are focused on becoming a leading AI company that not only has technology, but that solves massively important – and profitable – customer problems. Our first deployments at scale, in products with millions of devices sold worldwide, are happening as I type. In the drone market, we are aiming at three major initiatives in 2018: two with consumers, one with enterprise. Beyond 2018, we see Neurala scaling up to become a major international player. Our customers will become pickier: being an AI company in 2006 was unique, but with many companies trend-chasing by putting “AI” in their name, it has become more challenging to get our value differentiation across. So our customers and deployment successes will speak for us.
We are entering the era of AI that I would call “great, but now show me.” So many AI companies, some legit, some less legit, claim to have AI, but only a few will be able to support that claim. The year 2018 will be a major milestone for us, a year we will look back at and say “we crossed that threshold.” I wonder what other challenges and thresholds lie ahead of us. But first I would like to take a vacation. Lots of work ahead!
Max has spoken at dozens of events and venues, including a keynote at Mobile World Congress Drone Summit, TEDx, NASA, the Pentagon, GTC, InterDrone, National Labs, GE, Air Force Research Labs, HP, iRobot, Samsung, LG, Qualcomm, Ericsson, BAE Systems, AI World, Mitsubishi, ABB and Accenture, among many others. His work has been featured in TIME, IEEE Spectrum, CNN, MSNBC, The Boston Globe, The Chicago Tribune, Fortune, TechCrunch, VentureBeat, Nasdaq, Associated Press and many other media.
He is a Fulbright scholar, holds several patents and two PhDs: Cognitive and Neural Systems, Boston University; Experimental Psychology, University of Trieste, Italy.
Follow Neurala on Twitter at @Neurala, on Facebook at www.facebook.com/neurala, on YouTube at www.youtube.com/c/NeuralaTV or LinkedIn at https://www.linkedin.com/company/neurala.