Funding for robotics startups is on the rise: last month alone saw over $500 million invested in the development of autonomous robots. The Autonomous Mobile Robot (AMR) industry has been particularly hot over the past decade, with Locus Robotics becoming a unicorn earlier this year.
Providing development and consultancy services to a spectrum of early-stage businesses in the autonomous robot industry has allowed me to closely observe the challenges that start-ups face while planning their product development cycles. In this article, I aim to highlight four key issues and bottlenecks that most robotics businesses have a tough time circumventing.
- Selection and lead times of robotic components will delay your project
- Understand where the state of the art is in autonomous robotics
- A working demo does not imply proximity to production readiness
- Robots are not 100% autonomous (yet!)
Alright, let's get into it.
1. Selection and lead times of robotic components will delay your project
Robots can be incredibly complex at the hardware level alone. A wide range of specialized components is generally required to build an autonomous robot. Consider the widely popular AMRs: even the most basic hardware stack involves an appropriate selection of motors, motor drivers, a microcontroller, a computing board, safety sensors, a LiDAR, and possibly one or more cameras. There is a range of vendors for each of these components, and yet there is no one-size-fits-all choice.
For the purpose of illustration, let me discuss the AMR drive in a little more detail. The motor and driver requirements of a high-speed robotic sortation system for e-commerce will be vastly different from those of a 200 kg payload robot for intralogistics operations. The former may require high velocity output with a smooth acceleration profile and low power consumption; the latter will probably require high torque at both low and high speeds. In my experience working with leading companies in these domains, the lead times for most parts are in the range of 4–8 weeks.
Beyond the specs themselves, there are several other considerations when selecting motors: how much do they weigh? Do they adversely affect vehicle dynamics and require an additional suspension system? Do they alter the form factor and overall aesthetics of the robot? How reliable and power-efficient do they make the system? Similar questions can be posed for every sub-component of the robotic system.
Robotic components can rarely be selected in isolation from each other; there are inter-dependencies, and a holistic decision is vital for overall product performance. It is improbable to get every combination right on the first go, so emphasis should be placed on starting with components that satisfy the non-negotiable requirements, then pursuing rapid prototyping and iterative part selection from that point onward.
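This filter-then-iterate approach can be sketched as a simple selection pass: enforce the non-negotiables as hard constraints first, then score the surviving candidates on softer trade-offs. All vendor names and specs below are invented purely for illustration, not real part data.

```python
from dataclasses import dataclass

@dataclass
class Motor:
    name: str
    rated_torque_nm: float    # continuous torque
    max_speed_rpm: float
    weight_kg: float
    lead_time_weeks: int

# Hypothetical candidates from three imaginary vendors.
candidates = [
    Motor("vendor_a_90w",  rated_torque_nm=0.8, max_speed_rpm=4000, weight_kg=1.2, lead_time_weeks=6),
    Motor("vendor_b_200w", rated_torque_nm=2.5, max_speed_rpm=3000, weight_kg=2.9, lead_time_weeks=4),
    Motor("vendor_c_150w", rated_torque_nm=1.8, max_speed_rpm=3500, weight_kg=2.1, lead_time_weeks=10),
]

def meets_hard_requirements(m: Motor) -> bool:
    """Non-negotiables first: enough torque for the payload, acceptable lead time."""
    return m.rated_torque_nm >= 1.5 and m.lead_time_weeks <= 8

def score(m: Motor) -> float:
    """Softer trade-offs (speed vs. weight), evaluated only among viable parts."""
    return m.max_speed_rpm / 1000 - m.weight_kg

viable = [m for m in candidates if meets_hard_requirements(m)]
best = max(viable, key=score)
print(best.name)  # -> vendor_b_200w
```

In a real cycle, the hard constraints and the scoring function would evolve with each prototype iteration, and similar passes would run for every sub-component.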
2. Understand where the state of the art is in autonomous robotics
An overwhelming number of self-driving car companies with impressive pilot projects, along with a few indoor mobile robotics companies with mature product lines, give the perception that certain technological barriers in the autonomous robotics domain have been breached, or never existed at all.
Let me dive into this with an example: localization. Put simply, robot localization is the ability of a robot to identify its own position in the environment it operates in. In the absence of a global positioning system (GPS), robots rely on building maps of the environment they will operate in (using LiDARs or cameras), and then use those maps as references, together with on-board sensor readings, to estimate their position. Simple enough? Not really.
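The map-plus-sensor idea can be illustrated with a minimal one-dimensional histogram filter. The corridor map, the sensor model, and all probabilities below are made up for illustration; real systems work in 2-D or 3-D with far richer sensor models.

```python
# A 1-D "map" of a corridor: True marks cells where a landmark is visible.
world = [True, True, False, True, False]
n = len(world)

# The robot is "lost", so start with a uniform belief over all cells.
belief = [1.0 / n] * n

def sense(belief, measurement, p_hit=0.9, p_miss=0.1):
    """Weight each cell by how well it explains the sensor reading, then normalize."""
    weighted = [b * (p_hit if world[i] == measurement else p_miss)
                for i, b in enumerate(belief)]
    total = sum(weighted)
    return [w / total for w in weighted]

def move(belief, step):
    """Shift the belief by the commanded motion (assumed exact, for simplicity)."""
    return [belief[(i - step) % n] for i in range(n)]

# Sense a landmark, move one cell to the right, sense a landmark again.
belief = sense(belief, True)
belief = move(belief, 1)
belief = sense(belief, True)

most_likely = max(range(n), key=lambda i: belief[i])
print(most_likely)  # -> 1 (only a start at cell 0 is consistent with both readings)
```

Even in this toy world, note how the robot needs motion plus repeated sensing to disambiguate between cells that look identical, which is exactly where dynamic, real-world environments make things hard.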
Environments tend to be dynamic. Factory floors aren't always organized and often require restructuring to facilitate a given manufacturing cycle. Trolleys, machines, and people move in and out of the robots' field of view. To make matters more complicated, many indoor mobile robots use only a 2-D LiDAR to map and sense the world, so what they see is a two-dimensional slice of the environment, not the three-dimensional world that humans perceive. This is just one of the many problems that engineers working on autonomous robot navigation come across.
Fueled by capital and time investments in product research and development, market leaders in the domain have overcome a lot of these challenges to varying degrees. It is however important to understand that serious intellectual property development takes time and effort by capable and dedicated engineers.
In my experience of observing and working on product development cycles, an important learning has been to make founders and decision makers cognizant of the challenges they are likely to face in their robotics journey. It is vital to understand the underlying complexity so that application use-cases, customers and project timelines can be chosen carefully.
3. A working demo does not imply proximity to production readiness
Open-source robotics software stacks such as move_base have made robotics development easy and accessible to all. It is now trivial to achieve basic point-to-point navigation with some degree of obstacle avoidance. Buoyed by early trials and results, robotics startups unfortunately tend to rely on these demo setups and extrapolate their development timelines to production readiness.
There are several questions to ask and tests to perform before one places the production eggs in the open-source basket. Here are some of them:
- Does the robot perform as you want it to in a customer environment?
- Can you capture and understand all modes of algorithmic/software failures?
- Can you understand the software APIs enough to make the necessary state-machine transitions, user interface and the overall robot application out of these open-source implementations?
My first takeaway from implementing and integrating open-source software for robot applications was that it is not a monolithic blob. There are several individual implementations for specific requirements: LiDAR- and camera-based SLAM, re-localization, global planning, local planning, coverage planning, and so on. Some of these implementations are mutually compatible out of the box; others take more integration effort. Each implementation targets a certain use case and has a certain maturity in handling it. Careful study must be performed to understand which approach helps the most with your specific problem, and at what point you must write your own module or stack to solve your unique problems best.
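To make the state-machine glue concrete, here is a minimal sketch of an application-level state machine wrapped around a navigation stack. The states and events are hypothetical, not part of move_base or any particular open-source API; the point is that the application logic around the open-source pieces is yours to build and test.

```python
from enum import Enum, auto

class State(Enum):
    IDLE = auto()
    NAVIGATING = auto()
    RECOVERY = auto()
    FAULT = auto()

# Allowed transitions for a toy navigation application.
TRANSITIONS = {
    (State.IDLE, "goal_received"):        State.NAVIGATING,
    (State.NAVIGATING, "goal_reached"):   State.IDLE,
    (State.NAVIGATING, "planner_failed"): State.RECOVERY,
    (State.RECOVERY, "recovered"):        State.NAVIGATING,
    (State.RECOVERY, "recovery_failed"):  State.FAULT,
}

def step(state: State, event: str) -> State:
    """Reject events that are not valid in the current state."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"event {event!r} not allowed in state {state.name}")

# A run where the planner fails once, recovers, and finishes the goal.
s = State.IDLE
for event in ["goal_received", "planner_failed", "recovered", "goal_reached"]:
    s = step(s, event)
print(s.name)  # -> IDLE
```

Enumerating transitions explicitly like this also forces you to answer the second bullet above: every algorithmic failure mode needs a named event and a defined destination state.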
4. Robots are not 100% autonomous (yet!)
Unlike traditional industrial robots, modern autonomous robots are collaborative in nature, i.e. they work and exist in a workspace alongside humans. Their objectives are higher level: clean a floor, move material around, mow a lawn, and so on. This is in contrast to traditional robots, whose tasks can be programmed well in advance in terms of closely spaced trajectory points or motor angles.
As the tasks these robots perform become more complex, so does the underlying software engineering. While research institutes and companies constantly work towards making the existing technology more robust, collaborative robots are still prone to failures. Startups in particular have the difficult task of juggling tight delivery timelines, an arduous product development curve, and limited resources to oversee all of this.
As a result, start-ups often have to take the calculated risk of releasing a minimum viable product knowing that it is still open to several upgrades and refinements. Even in such a scenario, it is important to address two main issues:
- Know the limitations of the technology in place, and be able to deterministically predict the use cases the robot is not ready for. These can be conveyed to customers a priori so that there are fewer surprises for the end user.
- Many of the underlying algorithms in robotics are probabilistic, and not all failure cases can be captured in advance. However, the software should be packaged in a way that allows users to mitigate worst-case scenarios. For example, if the robot malfunctions and fails to accomplish its task for any reason, the goal should be to minimize downtime through either a quick reset or manual intervention to see the task through to completion.
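The retry-then-escalate pattern behind that second point can be sketched in a few lines. `attempt_task` and the docking example below are hypothetical, not taken from any specific robot stack; the essential idea is that the software never simply stalls.

```python
def attempt_task(robot_try, max_retries=2, operator_fallback=None):
    """Try an autonomous task; on repeated failure, escalate instead of stalling."""
    for _ in range(1 + max_retries):
        if robot_try():
            return "done_autonomously"
    # Automatic resets/retries have failed: minimize downtime by handing
    # the task over to a human operator instead of staying stuck.
    if operator_fallback is not None:
        operator_fallback()
        return "done_with_intervention"
    return "failed"

# Hypothetical flaky docking maneuver: fails twice, succeeds on the third try.
attempts = iter([False, False, True])
result_flaky = attempt_task(lambda: next(attempts))
print(result_flaky)  # -> done_autonomously

# A task that never succeeds autonomously falls back to the operator.
result_stuck = attempt_task(lambda: False, operator_fallback=lambda: None)
print(result_stuck)  # -> done_with_intervention
```

The exact numbers of retries and the shape of the fallback (remote teleoperation, on-site reset, task reassignment) are product decisions, but having a defined escalation path is what keeps downtime bounded.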
Gradually, as the technology catches up with more and more use cases, the need to develop and prepare for fall-backs will be minimized.
Thank you for reading this article. I hope this helps you plan your journey into robotics better.