What does each of the Yonder subteams do? In a series of posts, we're giving you a closer look, beginning with our Software/Sensing Team.

MEMBERS: Alex Smith (Lead), Eyvonne Hu, Alex Haggart, Brian Nguyen, Kelvin Sarabia, Si Wu, Cai Yeo

If you're familiar with us, Yonder Dynamics, you may have heard us frequently mention that we aim to create the first fully autonomous rover for the University Rover Challenge (URC), an annual robotics competition for college students. What you might not be familiar with is how we plan to accomplish so lofty a goal.

The answer: our Software/Sensing Team, one of our major secret weapons.

These amazing individuals have been working to develop a "brain" for our rover, so that it will be able to think for itself while out in the Utah desert (a stand-in for Mars terrain) during the competition. The URC requires each team to complete 4 main tasks: Science Cache, Autonomous Navigation, Extreme Retrieval and Delivery, and Equipment Servicing. While every team must attempt the newly added Autonomous Navigation task, which requires rovers to drive autonomously to specified locations, we plan to have our rover complete not just that task but all 4 tasks on its own, without manual guidance.

The Software/Sensing Team set out to find a way for the rover to detect the objects around itself, since only then can it determine which obstacles to avoid and which items its modular arm might need to grasp. Using neural networks (computer systems loosely modeled on the human nervous system), team members began by feeding thousands of images of objects they believed would be relevant to the competition into the system. The network applies filters ("kernels") across several layers that respond to simple shapes, such as lines and circles, and combines them so that the system gradually learns the appearances of various types of objects. The end result is an onboard computer that can determine, with a percentage of confidence, which objects appear in an image.
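To make the idea concrete, here is a minimal sketch of that kind of network, not Yonder's actual code: a small convolutional classifier whose learned kernels pick out simple shapes in early layers and whose final softmax reports a confidence percentage for each object class. The framework (PyTorch), class names, image size, and layer sizes are all placeholder assumptions for illustration.

import torch
import torch.nn as nn
import torch.nn.functional as F

CLASSES = ["rock", "tool", "marker", "background"]  # hypothetical labels

class TinyObjectNet(nn.Module):
    def __init__(self, num_classes=len(CLASSES)):
        super().__init__()
        # Each Conv2d layer applies a bank of learned kernels (filters)
        # over the image, producing feature maps.
        self.conv1 = nn.Conv2d(3, 16, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(16, 32, kernel_size=3, padding=1)
        self.pool = nn.MaxPool2d(2)
        # Classifier head: flattened features -> one score per class.
        self.fc = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x):                      # x: (batch, 3, 64, 64)
        x = self.pool(F.relu(self.conv1(x)))   # -> (batch, 16, 32, 32)
        x = self.pool(F.relu(self.conv2(x)))   # -> (batch, 32, 16, 16)
        x = x.flatten(1)
        return self.fc(x)                      # raw class scores (logits)

# Turning raw scores into a "percentage of confidence" per class:
model = TinyObjectNet()
image = torch.rand(1, 3, 64, 64)               # stand-in for a camera frame
probs = F.softmax(model(image), dim=1)[0]
for name, p in zip(CLASSES, probs):
    print(f"{name}: {p.item() * 100:.1f}%")

In practice the kernels aren't hand-drawn lines and circles; they start out random and take on those shapes as the network is trained on the labeled images.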

While objects like rocks and tools may ultimately be pertinent to the URC, we'd like to recognize something a computer system might not pick up on: the brilliance of our Software/Sensing Team.

Before object recognition

After object recognition

What goes into object recognition, with percentages of confidence