
Athena: An FPGA-based UGV from Olin College May 3, 2010

Posted by emiliekopp in labview robot projects.
3 comments

Meet Athena, a UGV designed by students at Olin College to compete in the 2010 Intelligent Ground Vehicle Competition (IGVC).

Athena avoids obstacles using an NI Single-Board RIO, an FPGA-based embedded device. Unlike conventional processors, FPGAs implement their processing logic in dedicated hardware via a matrix of reconfigurable gate-array circuitry and do not run an operating system.

Since FPGAs are essentially huge fields of programmable gates, they can be configured into many parallel hardware paths. This makes them truly parallel in nature, so different processing operations do not have to compete for the same resources. Programmers can map their solutions directly to the FPGA fabric, creating any number of task-specific cores that all run like simultaneous parallel circuits inside one FPGA chip.

This becomes very useful for roboticists. For anyone programming sophisticated algorithms for autonomy, FPGAs can make a netbook look like an Apple II. Granted, FPGAs are not the easiest embedded processing solution to master, especially if you don't have an extensive background in VHDL programming.

However, the students at Olin College have taken up LabVIEW FPGA, which allows them to program the FPGA on their sbRIO using an intuitive, graphical programming language; no VHDL programming necessary.

As a result, they can run their algorithms incredibly fast, and the faster your robot can think, the smarter it can become.

Here's what Nick Hobbs, one of Athena's builders, had to say:

The cool thing about this is we're processing LIDAR scans at 70 Hz. That means in 1/70 of a second we're evaluating 180 data points' effects on 16 possible vehicle paths. This is super fast, super parallel processing of a ton of data that couldn't happen without NI's FPGA. Oh, and naturally, all programmed in LabVIEW!
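To get a rough feel for the kind of computation Nick is describing, here is a minimal, hypothetical sketch in Python (the real implementation is LabVIEW FPGA code, not this): it scores 16 candidate constant-curvature paths against a 180-point LIDAR scan. The vehicle width, planning horizon, and path parameterization are my own assumptions, not details from Athena.

```python
import numpy as np

# Hypothetical parameters based on the post: a 180-point LIDAR scan
# evaluated against 16 candidate vehicle paths (arcs of constant curvature).
N_POINTS = 180
N_PATHS = 16
ROBOT_WIDTH = 0.8          # meters (assumed)
MAX_RANGE = 4.0            # meters (assumed planning horizon)

def score_paths(ranges_m):
    """Score each candidate path by the clearance of the LIDAR points near it.

    ranges_m: 180 range readings spanning -90..+90 degrees.
    Returns an array of 16 path scores; higher means clearer.
    """
    angles = np.radians(np.linspace(-90.0, 90.0, N_POINTS))
    # Convert the scan to Cartesian points in the robot frame (x forward).
    xs = ranges_m * np.cos(angles)
    ys = ranges_m * np.sin(angles)

    # Candidate paths: constant-curvature arcs, parameterized by curvature.
    curvatures = np.linspace(-1.0, 1.0, N_PATHS)   # 1/m (assumed range)

    scores = np.zeros(N_PATHS)
    ahead = (xs > 0) & (ranges_m < MAX_RANGE)      # only nearby points in front
    for i, k in enumerate(curvatures):
        # Lateral offset of each obstacle point from the arc y = k*x^2/2
        # (small-curvature approximation).
        offset = np.abs(ys[ahead] - 0.5 * k * xs[ahead] ** 2)
        blocking = offset < (ROBOT_WIDTH / 2.0)
        if np.any(blocking):
            # Score = distance to the nearest blocking obstacle along this path.
            scores[i] = xs[ahead][blocking].min()
        else:
            scores[i] = MAX_RANGE
    return scores

# Example: a clear scan except for an obstacle dead ahead at 1.5 m.
scan = np.full(N_POINTS, 10.0)
scan[85:95] = 1.5
best = int(np.argmax(score_paths(scan)))
print("best path index:", best)
```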

Now it's making more sense why they named their robot Athena; she's one smart robot. I'm looking forward to seeing more from Nick's team. Check out more videos on their YouTube channel.

For more info on FPGA-level programming and other multicore solutions, check out this white paper.


LabVIEW Robotics Connects to Microsoft Robotics Studio Simulator January 27, 2010

Posted by emiliekopp in labview robot projects.
7 comments

Several people have pointed out that Microsoft Robotics Developer Studio has some strikingly familiar development tools when compared to LabVIEW. Case in point: Microsoft's "visual programming language" and LabVIEW's "graphical programming language"; both are based on a dataflow programming paradigm.

LabVIEW Graphical Programming Language

MSRDS Visual Programming Language

There's no need to worry, though (at least, this is what I have to keep reminding myself). National Instruments and Microsoft have simply identified a similar need in the robotics industry. With all the hats a roboticist must wear to build a complete robotic system (programmer, mechanical engineer, controls expert, electrical engineer, master solderer, etc.), they need to exploit any development tools that allow them to build and debug robots as quickly and easily as possible. So it's nice to see that we're all on the same page. 😉

Now, both LabVIEW Robotics and MSRDS are incredibly useful robot development tools, each in its own right. That's why I was excited to see that LabVIEW Robotics includes a shipping example that enables users to build their code in LabVIEW and then test a robot's behavior in the MSRDS simulator. This way, you get the best of both worlds.

Here’s a delicious screenshot of the MSRDS-LabVIEW connectivity example I got to play with:

How it works:

Basically, LabVIEW communicates with the simulated robot in the MSRDS simulation environment as though it were a real robot. As such, it continuously acquires data from the simulated sensors (in this case, a camera, a LIDAR, and two bump sensors) and displays it on the front panel. The user can see the simulated robot from a bird's-eye view in the Main Camera indicator (the large indicator in the middle of the front panel; can you see the tiny red robot?). The user can see what is in front of the robot in the Camera on Robot indicator (the top right indicator on the front panel). And the user can see what the robot sees and interprets as obstacles in the Laser Range Finder indicator (this indicator, right below Camera on Robot, is particularly useful for debugging).

On the LabVIEW block diagram, the simulated LIDAR data obtained from the MSRDS environment is processed and used to perform some simple obstacle avoidance, using a Vector Field Histogram approach. LabVIEW then sends command signals back to MSRDS to control the robot's motors, successfully navigating the robot through the simulated environment.
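For readers unfamiliar with the technique, here is a minimal Vector Field Histogram-style sketch in Python (the shipping example itself is LabVIEW code): it bins a LIDAR scan into angular sectors, measures how occupied each sector is, and steers toward the clearest sector near the goal. The sector count, thresholds, and steering-to-wheel mapping are illustrative assumptions, not values from the example.

```python
import numpy as np

# Minimal VFH-style sketch, assuming a 180-point LIDAR scan covering
# -90..+90 degrees and a differential-drive robot.
N_SECTORS = 18                 # 10-degree sectors
OBSTACLE_THRESHOLD = 2.0       # meters: readings closer than this count as obstacles

def vfh_steering(ranges_m, goal_bearing_deg=0.0):
    """Return a steering angle (deg) toward the clearest sector near the goal."""
    angles_deg = np.linspace(-90.0, 90.0, len(ranges_m))
    density = np.zeros(N_SECTORS)
    sector_of = ((angles_deg + 90.0) / 180.0 * N_SECTORS).astype(int).clip(0, N_SECTORS - 1)
    # Accumulate obstacle "weight": closer readings contribute more.
    close = ranges_m < OBSTACLE_THRESHOLD
    np.add.at(density, sector_of[close], OBSTACLE_THRESHOLD - ranges_m[close])

    sector_centers = np.linspace(-85.0, 85.0, N_SECTORS)
    free = density < 0.5       # tunable: sectors considered passable
    if not np.any(free):
        return None            # no clear path; caller should stop or turn in place
    # Among free sectors, pick the one closest to the goal bearing.
    candidates = sector_centers[free]
    return float(candidates[np.argmin(np.abs(candidates - goal_bearing_deg))])

def wheel_commands(steer_deg, base_speed=0.5):
    """Map a steering angle to differential-drive wheel speeds (normalized)."""
    turn = steer_deg / 90.0    # positive = steer left (CCW)
    return base_speed - turn * 0.3, base_speed + turn * 0.3  # left, right

scan = np.full(180, 5.0)
scan[80:100] = 1.0                       # obstacle straight ahead
steer = vfh_steering(scan)
print("steer:", steer, "wheels:", wheel_commands(steer))
```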

There's a tutorial on the LabVIEW Robotics Code Exchange that goes into more detail on the example. You can check it out here.

Why is this useful?

LabVIEW users can build and modify their robot control code and test it out in the MSRDS simulator. This way, regardless of whether or not you have hardware for your robot prototype, you can start building and debugging the software. But here's the kicker: once your hardware is ready, you can take the exact same code you developed for the simulated robot and deploy it to an actual physical robot within a matter of minutes. LabVIEW takes care of porting the code to embedded targets like ARM processors, real-time OS targets and FPGAs so you don't have to. Reusing proof-of-concept code, tested and fine-tuned in the simulated environment, in the physical prototype will save developers SO MUCH TIME.

Areas of improvement:

As of now, the model used in the LabVIEW example is fixed, meaning you cannot change the physical configuration of actuators and sensors on the robot; you can only modify its behavior. Thus, you have a LIDAR, a camera, two bumper sensors and two wheels, in a differential-drive configuration, to play with. But it's at least a good start.

In the future, it would be cool to assign your own model (you pick the sensors, actuators, and physical configuration). Perhaps you could do this from LabVIEW too, instead of having to build one from scratch in C#. LabVIEW already has hundreds of drivers available to interface with robot sensors; you could potentially just pick from the long list and LabVIEW would build the model for you…

Bottom line:

It’s nice to see more development tools out there, like LabVIEW and MSRDS, working together. This allows roboticists to reuse and even share their designs and code. Combining COTS technology and open design platforms is the recipe for the robotics industry to mirror what the PC industry did 30 years ago.

Feedback control at its finest: Innovations from UCSD Coordinated Robotics Lab October 19, 2009

Posted by emiliekopp in industry robot spotlight, labview robot projects.
add a comment

I found this cool video (below), provided by IEEE Spectrum Online, the other day. Josh Romero, its narrator, must have experienced the robot revolution at this year's NIWeek, as much of the video footage is taken from the Day 3 keynote. Here's the full, extended version of Dr. Bewley's talk about the work being done at the UCSD Coordinated Robotics Lab.


This small treaded robot can climb stairs with ease and balance itself on a point.

Josh brings up a good point in his video: automatic feedback control can be the difference between simple, ordinary robots and incredibly sophisticated dynamic systems. Take Switchblade, for example. The robot performs low-level control on a dedicated embedded target (in this case, a 2M-gate FPGA on a Single-Board RIO) to automatically balance itself on a point. A separate real-time processor handles higher-level tasks like maneuvering up a flight of stairs. Being so small and having such a wide spectrum of mobility, it puts search-and-rescue robots like the PackBot to shame. See you at the top of the stairs, PackBot!

Ok, I take that back. Let’s avoid “shaming” PackBot. Please don’t shoot me, PackBot.
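To make the feedback-control point a bit more concrete, here is a minimal, hypothetical sketch of the kind of discrete PID loop that could close a balance loop like Switchblade's. The gains, loop rate, and sign conventions are placeholders, not details from the UCSD design (which runs on the FPGA, not in Python).

```python
# Illustrative discrete PID controller; gains and the 1 kHz rate are assumptions.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def balance_step(pid, tilt_rad):
    """Balance-on-a-point step: drive the treads to hold body tilt at zero.

    The sign convention (which drive direction 'catches' a lean) depends on the
    motor and IMU orientation; treat the output sign as a placeholder.
    """
    return pid.update(setpoint=0.0, measurement=tilt_rad)

pid = PID(kp=12.0, ki=0.5, kd=0.8, dt=0.001)   # 1 kHz loop (assumed)
print(balance_step(pid, tilt_rad=0.05))         # motor command for a 0.05 rad lean
```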


Stay tuned for a closer look at how Switchblade works in a future post.

How to Build a Quad Rotor UAV October 6, 2009

Posted by emiliekopp in code, labview robot projects.
20 comments

Blog Spotlight: Dr. Ben Black, a Systems Engineer at National Instruments, is documenting his trials and tribulations in his blog as he builds an autonomous unmanned aerial vehicle (UAV) using a Single-Board RIO (2M-gate FPGA + 400 MHz PowerPC processor), four brushless motors, some serious controls theory and lots of Gorilla Glue.

I particularly appreciate his attention to detail, stepping through elements of UAV design that are often taken for granted, like choosing reference frames, when you should use PID control, and the genius that is xkcd.
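As a small illustration of why reference frames matter, here is a hedged Python sketch of a body-to-world rotation for a quadrotor, using a Z-Y-X (yaw-pitch-roll) Euler convention. The convention and the example numbers are my own assumptions, not taken from Ben's posts.

```python
import numpy as np

def body_to_world(roll, pitch, yaw):
    """Return the 3x3 rotation matrix from the body frame to the world frame
    for a Z-Y-X (yaw-pitch-roll) Euler convention (assumed here)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])   # roll about x
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])   # pitch about y
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])   # yaw about z
    return Rz @ Ry @ Rx

# Example: thrust along the body z-axis, expressed in the world frame when the
# vehicle is pitched 10 degrees nose-down.
thrust_body = np.array([0.0, 0.0, 1.0])
print(body_to_world(0.0, np.radians(-10.0), 0.0) @ thrust_body)
```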

Like most roboticists, throughout the design process, he has to wear many hats. I think Ben put it best:

I think that the true interdisciplinary nature of the problems really makes the field interesting.  A roboticist has to have at minimum a working knowledge of mechanical engineering, electrical engineering, computer science / engineering and controls engineering.  My background is from the world of mechanical engineering (with a little dabbling in bio-mechanics), but I end up building circuits  and writing tons of code.  I’ve had to pick up / stumble through the electrical and computer science knowledge as I go along, and I know just enough to make me dangerous (I probably don’t always practice safe electrons…sometimes I let the magic smoke out of the circuits…and I definitely couldn’t write a bubble sort algorithm to save my life).

My point in this soap-box rant is that in the world of robotics it’s good to have a specialty, but to really put together a working system you also need to be a bit of a generalist.

For anyone even considering building a UAV (or anyone who just likes to read about cool robotics projects), I suggest you check it out. He shares his .m-files, LabVIEW code, and more. Thanks, Ben.

RAPHaEL: Another incredible robot design from RoMeLa September 29, 2009

Posted by emiliekopp in industry robot spotlight, labview robot projects.
3 comments

A lot of you may have already heard about the second-generation air-powered robotic hand from Dr. Hong and his engineering students at RoMeLa on Engadget. But seeing as how NI and RoMeLa have been longtime friends and have worked on many robotics projects together, we've got an inside story on how the new and improved RAPHaEL came to be. The following is a recap from Kyle Cothern, one of the undergrads who worked on RAPHaEL 2. He explains how closed-loop control for the mechanical hand was implemented in less than 6 hours, proof that seamless integration between hardware and software can make a big difference in robotic designs.

RAPHaEL (Robotic Air Powered HAnd with Elastic Ligaments) is a robotic hand that uses a novel corrugated-tubing actuation method to achieve human-like grasping with compliance. It was designed by a team of four undergraduate students working under advisor Dr. Dennis Hong in the Robotics and Mechanisms Lab (RoMeLa) at Virginia Tech. The hand was originally designed to be operated with simple on/off switches for each solenoid, providing limited control by a user. The first version was modified to use a simple microcontroller to accept switch input and run short routines for demos.

The second version of the hand was designed to include a microcontroller to allow for more complicated grasping methods that require closed-loop control. These grasping methods included closed-loop position and closed-loop force control to allow for form grasping and force grasping, the two most commonly used human grasping methods. Each method would require analog input from one or more sensors, analog output to one or more pressure regulators, and digital output to control the solenoids, along with a program to calculate the proper control signal to send to the pressure regulators based on the sensor data. Using the microcontroller from the first version of the hand was considered; however, it would have taken about a month for the team to redesign the controller to accept sensor input and provide analog output for the pressure regulator. It would have then taken weeks to program the controller and calibrate it properly, and a complete redesign would be necessary to add more sensors or actuators.

At this point, 3 of the 4 students working on the hand graduated and left the lab. With only one student left, it would take a considerably long time to implement a microcontroller, and due to the complexity of a custom-designed microcontroller, if that student were to leave the lab it would take a very long time for a new person to be trained to operate and continue research with the hand. The remaining team member decided to search for an easy-to-implement, expandable solution to the problem, to allow future research to continue without an extensive learning curve. The stringent requirements for this new controller led the final team member to consult with a colleague. The colleague recommended an NI CompactDAQ (cDAQ) system for its ease of implementation and expandability, along with its ability to acquire the sensor data, control the solenoids and control the pressure regulator.

Upon receiving the cDAQ, the solenoids were attached, and the control software was written in LabVIEW in about 1 hour. Then the electronic pressure regulator was attached in line with the hand, allowing for proportional control of the pressure to the hand within 1 more hour. At this point a force sensor was attached to the fingertip to create a closed-loop system. The interpretation code for the sensor was written in about 40 minutes, and PID control of the grasping force was functional in a grand total of about 6 hours.
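For a rough, text-based picture of that loop (the actual implementation is LabVIEW, not Python), here is a hedged sketch using NI's nidaqmx Python package: read the fingertip force sensor, run a simple PI calculation (derivative term omitted for brevity), and write a command voltage to the pressure regulator while a digital line holds a solenoid open. The channel names, sensor calibration, gains, and loop rate are all my own assumptions.

```python
import time
import nidaqmx  # NI-DAQmx Python API

# Assumed values; the real hardware mapping and gains are not in the post.
KP, KI = 0.8, 0.2
TARGET_FORCE_N = 2.0

def volts_to_newtons(v):
    return v * 10.0   # placeholder calibration for the fingertip force sensor

with nidaqmx.Task() as force_in, nidaqmx.Task() as pressure_out, nidaqmx.Task() as solenoid:
    force_in.ai_channels.add_ai_voltage_chan("cDAQ1Mod1/ai0")          # assumed channel
    pressure_out.ao_channels.add_ao_voltage_chan("cDAQ1Mod2/ao0",      # assumed channel
                                                 min_val=0.0, max_val=10.0)
    solenoid.do_channels.add_do_chan("cDAQ1Mod3/port0/line0")          # assumed channel

    solenoid.write(True)          # open the solenoid feeding the finger
    integral = 0.0
    for _ in range(1000):         # simple software-timed loop, ~100 Hz
        force = volts_to_newtons(force_in.read())
        error = TARGET_FORCE_N - force
        integral += error * 0.01
        command = max(0.0, min(10.0, KP * error + KI * integral))
        pressure_out.write(command)   # command voltage to the pressure regulator
        time.sleep(0.01)
    solenoid.write(False)
```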

The RoMeLa team plans to upgrade their robotic hand even further by moving to a CompactRIO controller. The CompactRIO would allow control calculations and responses to happen at a much faster rate, since it combines a dedicated FPGA with a real-time embedded processor. With a new, beefed-up controller, they plan to test other control schemes such as position- or vision-based control. They also plan to incorporate additional degrees of freedom (as if there weren't already enough?!) by adding control of a wrist or arm mechanism.

Dr. Hong also gave a heads-up that the Discovery Channel will be featuring some of the robotic innovations from RoMeLa, so keep an eye out for an update to this blog post with a link to that footage.

Blind Driver Challenge from Virginia Tech: A Semi Autonomous Automobile for the Visually Impaired September 1, 2009

Posted by emiliekopp in industry robot spotlight, labview robot projects.
5 comments

How 9 mechanical engineering undergrads used commercial off-the-shelf (COTS) technology and design hardware and software donated by National Instruments to create a sophisticated, semi-autonomous vehicle that allows the visually impaired to perform a task that was previously thought impossible: driving a car.

Greg Jannaman (pictured in passenger seat), ME senior and team lead for the Blind Driver Challenge project at VT, was kind enough to offer some technical details on how they made this happen.

How does it work?

One of the keys to success was leveraging COTS technology whenever possible. This meant that, rather than building things from scratch, the team purchased hardware from commercial vendors, which allowed them to focus on the important stuff, like how to translate visual information to a blind driver.

Example of the information sent back from a LIDAR sensor

So they started with a dune buggy. They tacked on a Hokuyo laser range finder (LIDAR) to the front, which essentially pulses a laser signal across an area in front of the vehicle and gathers information about obstacles from the laser signals that are bounced back. LIDAR is a lot like radar, only it uses light instead of radio waves.

For control of the vehicle, they used a CompactRIO embedded processing platform. They interfaced their sensors and vehicle actuators directly to an onboard FPGA and performed the environmental perception on the real-time 400 MHz PowerPC processor. Processing the sensor feedback in real time allowed the team to send immediate feedback to the driver. But the team did not have to learn how to program the FPGA in VHDL, nor did they have to program the embedded processor with machine-level code. Rather, they did all their programming on one software development platform: LabVIEW. This enabled nine MEs to become embedded programmers on the spot.

But how do you send visual feedback to someone who cannot see? You use their other senses, mainly touch and hearing. The driver wears a vest that contains vibrating motors, much like the motors you would find in your PS2 controller (this is called haptic feedback, for anyone interested). The CompactRIO makes the vest vibrate to notify the driver of obstacle proximity and to regulate speed, just like a car racing video game. The driver also wears a set of headphones. By sending a series of clicks to the left and right earphones, the driver uses the audible feedback to navigate around the detected obstacles.
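Here is a small, hypothetical Python sketch of what that kind of driver-feedback mapping could look like: obstacle proximity scales vest vibration intensity, and the bearing of the clearest path biases the click rates sent to the left and right earphones. The mapping functions, ranges, and rates are my own illustrative assumptions, not the VT team's.

```python
def vest_intensity(nearest_obstacle_m, max_range_m=8.0):
    """0.0 (no vibration) to 1.0 (full vibration) as obstacles get closer."""
    d = max(0.0, min(nearest_obstacle_m, max_range_m))
    return 1.0 - d / max_range_m

def click_rates(clear_bearing_deg, base_rate_hz=4.0):
    """Left/right click rates (Hz) steering the driver toward the clear bearing.

    Positive bearing = clear path to the right, so click faster in the right ear.
    """
    bias = max(-1.0, min(1.0, clear_bearing_deg / 90.0))
    left = base_rate_hz * (1.0 - bias)
    right = base_rate_hz * (1.0 + bias)
    return left, right

print(vest_intensity(2.0))        # obstacle 2 m ahead -> strong vibration
print(click_rates(30.0))          # clear path 30 deg right -> faster right clicks
```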

The team has already hosted numerous blind drivers to test the vehicle. The test runs have been so successful that they're having to ask drivers to refrain from doing donuts in the parking lot. And they already have some incredible plans for improving the vehicle even further. Watch the video to find out more about the project and learn about their plans to further incorporate haptic feedback.