
Athena: An FPGA-based UGV from Olin College May 3, 2010

Posted by emiliekopp in labview robot projects.

Meet Athena, a UGV designed by students at Olin College to compete in the 2010 Intelligent Ground Vehicle Competition (IGVC).

Athena avoids obstacles using an NI Single-Board RIO (sbRIO), an FPGA-based embedded device. Unlike conventional processors, FPGAs implement processing logic in dedicated hardware, a matrix of reconfigurable gate-array circuitry, and do not run an operating system.

Because FPGAs are essentially huge fields of programmable gates, they can be configured into many parallel hardware paths. This makes them truly parallel in nature, so different processing operations do not have to compete for the same resources. Programmers can map their solutions directly onto the FPGA fabric, creating any number of task-specific cores that all run as simultaneous parallel circuits inside one FPGA chip.

This is very useful for roboticists. For anyone programming sophisticated algorithms for autonomy, FPGAs can make a netbook look like an Apple II. Granted, FPGAs are not the easiest embedded processing solution to master, especially if you don’t have an extensive background in VHDL programming.

However, the students at Olin College have taken up LabVIEW FPGA, which allows them to program the FPGA on their sb-RIO using an intuitive, graphical programming language; no VHDL programming necessary.

As a result, they can run their algorithms incredibly fast, and the faster your robot can think, the smarter your robot can become.

Here’s what Nick Hobbs, one of Athena’s builders, had to say:

The cool thing about this is we’re processing LIDAR scans at 70 Hz. That means in 1/70 of a second we’re evaluating 180 data points’ effects on 16 possible vehicle paths. This is super fast, super parallel processing of a ton of data that couldn’t happen without NI’s FPGA. Oh, and naturally, all programmed in LabVIEW!
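To put that workload in perspective, here is a minimal Python/NumPy sketch of the kind of computation Nick describes: scoring 16 candidate paths against a 180-point scan. The path headings, corridor width, and scoring rule are my own illustrative assumptions, not the team’s LabVIEW FPGA implementation; on the FPGA, the 16 evaluations run as truly parallel circuits rather than as vectorized array operations.

```python
import numpy as np

# Hypothetical parameters: one 180-point scan (1-degree resolution) and
# 16 candidate steering paths, each approximated by a heading angle.
SCAN_POINTS = 180
NUM_PATHS = 16
PATH_HALF_WIDTH_DEG = 15.0   # assumed corridor each path "sweeps"

def score_paths(ranges_m: np.ndarray) -> np.ndarray:
    """Return a clearance score for each candidate path.

    ranges_m: shape (180,) array of LIDAR ranges in meters,
              where index i corresponds to bearing i degrees.
    """
    bearings = np.arange(SCAN_POINTS, dtype=float)        # 0..179 degrees
    path_headings = np.linspace(10.0, 170.0, NUM_PATHS)   # assumed spread of paths

    # For every (path, beam) pair, does the beam fall inside the path's corridor?
    in_corridor = np.abs(bearings[None, :] - path_headings[:, None]) <= PATH_HALF_WIDTH_DEG

    # Score each path by the closest obstacle inside its corridor
    # (a larger minimum range means a clearer path).
    masked = np.where(in_corridor, ranges_m[None, :], np.inf)
    return masked.min(axis=1)

if __name__ == "__main__":
    scan = np.full(SCAN_POINTS, 10.0)
    scan[85:95] = 1.2          # pretend there is an obstacle dead ahead
    scores = score_paths(scan)
    print("best path heading:", np.linspace(10.0, 170.0, NUM_PATHS)[scores.argmax()])
```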

It makes more sense now why they named their robot Athena: she’s one smart robot. I’m looking forward to seeing more from Nick’s team. Check out more videos on their YouTube channel.

For more info on FPGA-level programming and other multicore solutions, check out this white paper.


RoboCup 2009: Robot Rescue League Team Spotlight February 25, 2010

Posted by emiliekopp in labview robot projects, Uncategorized.

Get up close and personal with RoboRescue Team FH-Wels, from the University of Applied Sciences Upper Austria. These students and researchers have an impressive record of participating in, and winning, a variety of worldwide robotics competitions.

Their latest success: building an autonomous robot to compete in the 2009 RoboCup Rescue League, a competition in which autonomous robots navigate a small-scale obstacle course of complex, unstructured terrain in search of victims awaiting rescue.

Their white paper is extremely informative, providing a breakdown of the hardware and software design. The team wisely chose commercial, off-the-shelf (COTS) technologies for their robot design, including a notebook PC and haptic joystick for the command station, a D-Link router for communications, an NI sb-RIO for the onboard processing, a Hokuyo 2-D laser range finder for mapping, an Xsens IMU for localization, an NI Compact Vision System for image processing, and lots more. To piece it all together, they used LabVIEW for software programming.

One blog post wouldn’t do them justice, so I figured I’d just embed their white paper. It serves as an excellent reference design for anyone building a UGV for search and rescue applications.



And here’s a video of their robot in action:

LabVIEW Robotics Connects to Microsoft Robotics Studio Simulator January 27, 2010

Posted by emiliekopp in labview robot projects.

Several people have pointed out that Microsoft Robotics Developer Studio (MSRDS) has some strikingly familiar development tools when compared to LabVIEW. Case in point: Microsoft’s “visual programming language” and LabVIEW’s “graphical programming language” are both based on a dataflow programming paradigm.

LabVIEW Graphical Programming Language

MSRDS Visual Programming Language

There’s no need to worry, though (at least, this is what I have to keep reminding myself). National Instruments and Microsoft have simply identified a similar need in the robotics industry. With all the hats a roboticist must wear to build a complete robotic system (programmer, mechanical engineer, controls expert, electrical engineer, master solderer, etc.), they need to exploit any development tools that allow them to build and debug robots as quickly and easily as possible. So it’s nice to see that we’re all on the same page. 😉

Now, both LabVIEW Robotics and MSRDS are incredibly useful robot development tools, each in its own right. That’s why I was excited to see that LabVIEW Robotics includes a shipping example that enables users to build their code in LabVIEW and then test a robot’s behavior in the MSRDS simulator. This way, you get the best of both worlds.

Here’s a delicious screenshot of the MSRDS-LabVIEW connectivity example I got to play with:

How it works:

Basically, LabVIEW communicates with the simulated robot in the MSRDS simulation environment as though it were a real robot. As such, it continuously acquires data from the simulated sensors (in this case, a camera, a LIDAR and two bump sensors) and displays it on the front panel. The user can see the simulated robot from a bird’s-eye view in the Main Camera indicator (the large indicator in the middle of the front panel; can you see the tiny red robot?). The user can see what is in front of the robot in the Camera on Robot indicator (the top right indicator on the front panel). And the user can see what the robot sees and interprets as obstacles in the Laser Range Finder indicator (this indicator, right below Camera on Robot, is particularly useful for debugging).

On the LabVIEW block diagram, the simulated LIDAR data obtained from the MSRDS environment is processed and used to perform some simple obstacle avoidance using a Vector Field Histogram approach. LabVIEW then sends command signals back to MSRDS to control the robot’s motors, successfully navigating the robot through the simulated environment.
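For anyone curious about what a Vector Field Histogram does, here is a stripped-down, VFH-style steering chooser in Python. It is only an illustration of the idea, not NI’s implementation; the sector count and obstacle threshold are arbitrary assumptions.

```python
import numpy as np

NUM_SECTORS = 36              # 10-degree sectors around the robot
OBSTACLE_THRESHOLD = 2.0      # assumed: ranges closer than this count as blocked

def vfh_steering(angles_rad: np.ndarray, ranges_m: np.ndarray,
                 goal_rad: float) -> float:
    """Pick a steering direction: the free sector closest to the goal heading."""
    # Build a polar "density" histogram: a sector is occupied if any beam
    # falling inside it is closer than the threshold.
    sector_idx = ((angles_rad % (2 * np.pi)) / (2 * np.pi) * NUM_SECTORS).astype(int)
    occupied = np.zeros(NUM_SECTORS, dtype=bool)
    for idx, r in zip(sector_idx, ranges_m):
        if r < OBSTACLE_THRESHOLD:
            occupied[idx] = True

    # Candidate headings are the centers of the free sectors.
    centers = (np.arange(NUM_SECTORS) + 0.5) * 2 * np.pi / NUM_SECTORS
    free = centers[~occupied]
    if free.size == 0:
        return goal_rad + np.pi        # everything blocked: turn around

    # Choose the free heading with the smallest angular distance to the goal.
    diff = np.angle(np.exp(1j * (free - goal_rad)))
    return free[np.abs(diff).argmin()]
```

In the shipping example, this role is played by LabVIEW VIs operating on the simulated scan; the resulting steering decision is what gets turned into the motor commands sent back to MSRDS.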

There’s a tutorial on the LabVIEW Robotics Code Exchange that goes into more detail for the example. You can check it out here.

Why is this useful?

LabVIEW users can build and modify their robot control code and test it out in the MSRDS simulator. This way, regardless of whether or not you have hardware for your robot prototype, you can start building and debugging the software. But here’s the kicker: once your hardware is ready, you can take the exact same code you developed for the simulated robot and deploy it to an actual physical robot within a matter of minutes. LabVIEW takes care of porting the code to embedded processors like ARMs, RT OS targets and FPGAs so you don’t have to. Reusing proof-of-concept code, tested and fine-tuned in the simulated environment, in the physical prototype will save developers SO MUCH TIME.

Areas of improvement:

As of now, the model used in the LabVIEW example is fixed, meaning you cannot change the physical configuration of actuators and sensors on the robot; you can only modify the robot’s behavior. Thus, you have a LIDAR, a camera, two bumper sensors and two wheels, in a differential-drive configuration, to play with. But it’s at least a good start.

In the future, it would be cool to assign your own model (you pick the sensors, actuators and physical configuration). Perhaps you could do this from LabVIEW too, instead of having to build one from scratch in C#. LabVIEW already has hundreds of drivers available to interface with robot sensors; you could potentially just pick from the long list and have LabVIEW build the model for you…

Bottom line:

It’s nice to see more development tools out there, like LabVIEW and MSRDS, working together. This allows roboticists to reuse and even share their designs and code. Combining COTS technology and open design platforms is the recipe for the robotics industry to mirror what the PC industry did 30 years ago.

National Instruments Releases New Software for Robot Development: Introducing LabVIEW Robotics December 7, 2009

Posted by emiliekopp in industry robot spotlight, labview robot projects.

Well, I found out what the countdown was for. Today, National Instruments released new software specifically for robot builders, LabVIEW Robotics. One of the many perks of being an NI employee is that I can download software directly from our internal network, free of charge, so I decided to check this out for myself. (Note: This blog post is not a full product review, as I haven’t had much time to critique the product, so this will simply be some high-level feature highlights.)

While the product video states that LabVIEW Robotics software is built on 25 years of LabVIEW development, right off the bat, I notice some big differences between LabVIEW 2009 and LabVIEW Robotics. First off, the Getting Started Window:

For anyone not already familiar with LabVIEW, this won’t sound like much, but the Getting Started Window now features a new, improved experience, starting with an embedded, interactive Getting Started Tutorial video (starring robot-friend Shelley Gretlein, a.k.a. RoboGret). There’s a Robotics Project Wizard in the upper left corner that, when you click on it, helps you set up your system architecture and select various processing schemes for your robot. At first glance, it looks like this wizard is best suited to NI hardware (i.e. sbRIO, cRIO and the NI LabVIEW Robotics Starter Kit), but it looks like future software updates might include other, third-party processing targets (perhaps ARM?).

The next big change I noticed is the all-new Robotics functions palette. I’ve always felt that LabVIEW has been a good programming language for robot development, and now it just got better, with several new robotics-specific programming functions, from Velodyne LIDAR sensor drivers to A* path planning algorithms. There look to be hundreds of new VIs created for this product release.
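The A* planners caught my eye. For anyone who hasn’t met the algorithm, here is a generic grid-based A* in Python; it is not the shipping VI, just a sketch of what such a planner computes: a shortest path over an occupancy grid, with a heuristic guiding the search toward the goal.

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected occupancy grid (0 = free, 1 = blocked).

    grid: list of lists; start/goal: (row, col) tuples.
    Returns a list of cells from start to goal, or None if unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    h = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])   # Manhattan heuristic
    open_set = [(h(start), start)]            # priority queue ordered by f = g + h
    came_from = {start: None}
    g_cost = {start: 0}

    while open_set:
        _, cell = heapq.heappop(open_set)
        if cell == goal:
            # Walk the parent links back to the start to recover the path.
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nb
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g_cost[cell] + 1
                if ng < g_cost.get(nb, float("inf")):
                    g_cost[nb] = ng
                    came_from[nb] = cell
                    heapq.heappush(open_set, (ng + h(nb), nb))
    return None

if __name__ == "__main__":
    world = [[0, 0, 0, 0],
             [1, 1, 1, 0],
             [0, 0, 0, 0]]
    print(astar(world, (0, 0), (2, 0)))   # routes around the wall of 1s
```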

Which leads me to the Example Finder. There are several new robotics-specific example VIs to choose from to help you get started. Some examples help you connect to third-party software, like Microsoft Robotics Studio or Cogmation robotSim. There are examples for motion control and steering, including differential drive and mecanum steering. There are also full-fledged example project files for various types of UGVs for you to study and copy/paste from, including the project files for ViNI and NIcholas, two NI-built demonstration robots. And if that’s not enough, NI has launched a new code exchange specifically for robotics, with hundreds of additional examples to share and download online. (A little birdie told me that NI R&D will be contributing to the code available on this code exchange in between product releases as well.)
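Since the motion examples include differential drive, here is the basic kinematic model that kind of example builds on, written as a tiny Python sketch. The parameter names are mine, and presumably the shipping VIs wrap this math for you.

```python
import math

def diff_drive_step(x, y, theta, v_left, v_right, wheel_base, dt):
    """Advance a differential-drive pose one time step (unicycle model).

    v_left / v_right are wheel surface speeds in m/s, wheel_base is the
    distance between the wheels in meters. Returns the new (x, y, theta).
    """
    v = (v_right + v_left) / 2.0              # forward speed of the chassis
    omega = (v_right - v_left) / wheel_base   # turn rate (rad/s)
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += omega * dt
    return x, y, theta
```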

This is just my taste of the new features this product has. To get the official product specs and feature list, you’ll have to visit the LabVIEW Robotics product page on ni.com. I also found this webcast, Introduction to NI LabVIEW Robotics, if you care to watch a 9-minute demo.

A more critical product review will be coming soon.

Looks like the robot revolution has begun.

Open Source LabVIEW Code: LIDAR Example featuring Radiohead September 15, 2009

Posted by emiliekopp in code, labview robot projects.

This one comes compliments of Alessandro Ricco, a LabVIEW Champion in Italy who had some free time on his hands. He was inspired by Radiohead’s music video for House of Cards. Some of you may recall that this is the music video that was created without any filming whatsoever. Rather, the producers recorded 3D images of Thom Yorke singing the lyrics with a Velodyne LIDAR sensor and then played back the data in sync with the song.

I mentioned LIDAR technology before when describing the Blind Driver Challenge car from Virginia Tech. There are a ton of other robots I’ve come across that use LIDAR for sensing and perception, so I figured you robot builders out there might be interested in getting your hands on some code to do this yourself.

Start by downloading Alessandro’s example here. You’ll need LabVIEW 8.5 or newer. If you don’t have LabVIEW, you can download free evaluation software here (be warned, it might take some time to download).

You’ll also need to find yourself some LIDAR data. If you don’t have a $20,000 LIDAR sensor lying around the lab, you can simply download the LIDAR data from the Radiohead music video from Google Code.

On the other hand, if you do have a LIDAR lying around and, let’s say, you want to create your own music video (or, perhaps more likely, you just want to create a video recording of the 3D data your mobile robot just acquired), Alessandro also includes a VI that saves each 3D plot as a .jpeg and then strings them all together to create an .avi. Here’s where you can find the necessary IMAQ driver VIs to do this part (be sure to download NI-IMAQ 4.1; I don’t think you’ll need the other stuff).
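If you want to roll that frames-to-video step yourself outside of LabVIEW, here is a hedged Python sketch of the same idea using matplotlib and OpenCV in place of the IMAQ VIs. The frames/*.csv naming and the x, y, z column layout are assumptions about how you have stored per-frame point data, not the format of Alessandro’s example or the Google Code download.

```python
import glob
import cv2                      # OpenCV stands in for the IMAQ AVI VIs here
import numpy as np
import matplotlib
matplotlib.use("Agg")           # render off-screen; no GUI needed
import matplotlib.pyplot as plt

FPS = 30
frame_files = sorted(glob.glob("frames/*.csv"))   # hypothetical per-frame point files

writer = None
for path in frame_files:
    pts = np.loadtxt(path, delimiter=",")         # assumed columns: x, y, z, ...
    fig = plt.figure(figsize=(6.4, 4.8), dpi=100)
    ax = fig.add_subplot(111, projection="3d")
    ax.scatter(pts[:, 0], pts[:, 1], pts[:, 2], s=1)
    fig.canvas.draw()
    # Grab the rendered figure as an RGB image, then hand it to the video writer.
    img = np.ascontiguousarray(np.asarray(fig.canvas.buffer_rgba())[:, :, :3])
    plt.close(fig)
    frame = cv2.cvtColor(img, cv2.COLOR_RGB2BGR)
    if writer is None:
        h, w = frame.shape[:2]
        writer = cv2.VideoWriter("lidar.avi",
                                 cv2.VideoWriter_fourcc(*"MJPG"), FPS, (w, h))
    writer.write(frame)

if writer is not None:
    writer.release()
```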


Big thanks to Alessandro. His instructions, documented in his VIs, are exceptional.

Blind Driver Challenge from Virginia Tech: A Semi Autonomous Automobile for the Visually Impaired September 1, 2009

Posted by emiliekopp in industry robot spotlight, labview robot projects.

How nine mechanical engineering undergrads used commercial off-the-shelf (COTS) technology, along with design hardware and software donated by National Instruments, to create a sophisticated, semi-autonomous vehicle that allows the visually impaired to perform a task previously thought impossible: driving a car.

Greg Jannaman (pictured in passenger seat), ME senior and team lead for the Blind Driver Challenge project at VT, was kind enough to offer some technical details on how they made this happen.

How does it work?

One of the keys to success was leveraging COTS technology whenever possible. This meant that, rather than building things from scratch, the team purchased hardware from commercial vendors, which allowed them to focus on the important stuff, like how to translate visual information to a blind driver.

example of the information sent back from a LIDAR sensor

So they started with a dune buggy. They tacked a Hokuyo laser range finder (LIDAR) onto the front, which essentially pulses a laser signal across an area in front of the vehicle and receives information about obstacles from the laser signals that bounce back. LIDAR is a lot like radar, only it uses light instead of radio waves.
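For anyone new to LIDAR, the data that comes back is essentially a list of ranges at known bearing angles, and converting a scan into obstacle coordinates is simple trigonometry. Here is a minimal Python sketch; the angle parameters are illustrative, not the Hokuyo’s actual specs.

```python
import numpy as np

def scan_to_points(ranges_m, start_angle_rad, angle_step_rad):
    """Convert one planar LIDAR scan (ranges plus bearing angles) into
    x/y obstacle coordinates in the sensor frame."""
    angles = start_angle_rad + np.arange(len(ranges_m)) * angle_step_rad
    x = ranges_m * np.cos(angles)     # forward
    y = ranges_m * np.sin(angles)     # left/right
    return np.column_stack((x, y))
```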

For control of the vehicle, they used a CompactRIO embedded processing platform. They interfaced their sensors and vehicle actuators directly to an onboard FPGA and performed the environmental perception on the real-time 400 MHz PowerPC processor. Processing the sensor feedback in real time allowed the team to send immediate feedback to the driver. But the team did not have to learn how to program the FPGA in VHDL, nor did they have to program the embedded processor with machine-level code. Rather, they performed all programming on one software development platform: LabVIEW. This enabled nine MEs to become embedded programmers on the spot.

But how do you send visual feedback to someone who cannot see? You use their other senses, mainly touch and hearing. The driver wears a vest that contains vibrating motors, much like the motors you would find in your PS2 controller (this is called haptic feedback, for anyone interested). The CompactRIO makes the vest vibrate to notify the driver of obstacle proximity and to regulate speed, just like a car racing video game. The driver also wears a set of headphones. By sending a series of clicks to the left and right earphones, the system gives the driver audible feedback for navigating around the detected obstacles.
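As a rough illustration of how such feedback might be mapped, the core of it is just a couple of small functions. The scales and thresholds below are made up for the sketch and are not the VT team’s actual tuning.

```python
def vest_intensity(obstacle_distance_m, max_range_m=5.0):
    """Map obstacle proximity to a 0..1 vibration intensity:
    nothing at max range, full vibration when an obstacle is touching."""
    d = min(max(obstacle_distance_m, 0.0), max_range_m)
    return 1.0 - d / max_range_m

def click_pattern(steer_angle_rad, max_angle_rad=0.5):
    """Decide which earphone clicks and how often, from a steering command.
    Positive angles mean steer right, so click in the right ear; larger
    corrections click faster. Returns (ear, clicks_per_second)."""
    ear = "right" if steer_angle_rad > 0 else "left"
    urgency = min(abs(steer_angle_rad) / max_angle_rad, 1.0)
    return ear, 1.0 + 9.0 * urgency    # assumed scale of 1..10 clicks per second
```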

The team has already hosted numerous blind drivers to test the vehicle. The test runs have been so successful that they’re having to ask drivers to refrain from performing donuts in the parking lot. And they already have some incredible plans for improving the vehicle even further. Watch the video to find out more about the project and learn about their plans to further incorporate haptic feedback.