
A quick update to the Blind Driver Challenge vehicle October 14, 2009

Posted by emiliekopp in labview robot projects.

Here’s a technical case study that provides some additional information on how the Virginia Tech team built the blind driver vehicle I mentioned in a previous blog post.

In addition, I wanted to share this video I found of 16-year-old Addie Hagen getting prepped to take a first stab at driving a car. It’s short and simple, with little video editing. But watching it truly demonstrates the impact this project can have on a person who thought something like driving a car would never be possible in her lifetime.


RAPHaEL: Another incredible robot design from RoMeLa September 29, 2009

Posted by emiliekopp in industry robot spotlight, labview robot projects.

A lot of you may have already heard about the second-generation air-powered robotic hand on Engadget from Dr. Hong and his engineering students at RoMeLa. But seeing as how NI and RoMeLa have been longtime friends and have worked on many robotics projects together, we’ve got an inside story on how the new and improved RAPHaEL came to be. The following is a recap from Kyle Cothern, one of the undergrads who worked on RAPHaEL 2. He explains how closed-loop control for the mechanical hand was implemented in less than six hours, proof that seamless integration between hardware and software can make a big difference in robotic designs.

RAPHaEL (Robotic Air Powered HAnd with Elastic Ligaments) is a robotic hand that uses a novel corrugated tubing actuation method to achieve human-like grasping with compliance. It was designed by a team of four undergraduate students working under advisor Dr. Dennis Hong in the Robotics and Mechanisms Laboratory (RoMeLa) at Virginia Tech. The hand was originally designed to be operated with simple on/off switches for each solenoid, providing limited control by a user. The first version was later modified to use a simple microcontroller to accept switch input and run short routines for demos.

The second version of the hand was designed to include a microcontroller to allow for more complicated grasping methods that require closed-loop control. These grasping methods included closed-loop position and closed-loop force control to allow for form grasping and force grasping, the two most commonly used human grasping methods. Each method would require analog input from one or more sensors, analog output to one or more pressure regulators, and digital output to control the solenoids, along with a program to calculate the proper control signal to send to the pressure regulators based on the sensor data. Using the microcontroller from the first version of the hand was considered; however, it would have taken about a month for the team to redesign the controller to accept sensor input and provide analog output for the pressure regulator. It would then have taken weeks to program and calibrate the controller properly, and a complete redesign would be necessary to add more sensors or actuators.

At this point, three of the four students working on the hand graduated and left the lab. With only one student left, implementing a custom microcontroller would have taken a considerable amount of time, and because of the complexity of a custom-designed controller, if that student were also to leave the lab, it would take a long time to train a new person to operate the hand and continue the research. The remaining team member decided to search for an easy-to-implement, expandable solution that would allow future research to continue without an extensive learning curve. These stringent requirements led him to consult with a colleague, who recommended an NI CompactDAQ (cDAQ) system for its ease of implementation and expandability, along with its ability to acquire the sensor data, control the solenoids and control the pressure regulator.

Once the cDAQ arrived, the solenoids were attached and the control software was written in LabVIEW in about an hour. The electronic pressure regulator was then attached in line with the hand, allowing proportional control of the pressure to the hand within one more hour. At that point, a force sensor was attached to the fingertip to close the loop. The interpretation code for the sensor was written in about 40 minutes, and PID control of the grasping force was functional in a grand total of about six hours.
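For readers curious what that closed-loop force control looks like written out, here is a minimal sketch of a PID grasp-force loop in plain Python. The team’s actual implementation was a LabVIEW VI running against the cDAQ; the sensor/regulator helper functions and gain values below are hypothetical placeholders, not RoMeLa’s code.

```python
import time

def read_fingertip_force():
    """Hypothetical stand-in for the fingertip force sensor (analog input)."""
    return 0.0  # replace with a real sensor read

def set_regulator_pressure(command_v):
    """Hypothetical stand-in for the electronic pressure regulator (analog output)."""
    pass        # replace with a real analog write

def pid_force_control(setpoint_n, kp=2.0, ki=0.5, kd=0.05, dt=0.01):
    """Drive the measured grasp force toward setpoint_n with a textbook PID loop."""
    integral = 0.0
    prev_error = 0.0
    while True:
        error = setpoint_n - read_fingertip_force()
        integral += error * dt
        derivative = (error - prev_error) / dt
        output = kp * error + ki * integral + kd * derivative
        # Clamp to an assumed 0-10 V command range on the regulator.
        set_regulator_pressure(max(0.0, min(10.0, output)))
        prev_error = error
        time.sleep(dt)

# Example: hold roughly 5 N of grasp force.
# pid_force_control(5.0)
```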

The RoMeLa team plans to take the robotic hand even further by upgrading to a CompactRIO controller. The CompactRIO would allow control calculations and responses to happen at a much faster rate, since it combines a dedicated FPGA with a real-time embedded processor. With the beefed-up controller, they plan to test other control schemes such as position-based or vision-based control. They also plan to incorporate additional degrees of freedom (as if there weren’t already enough?!) by adding control of a wrist or arm mechanism.

Dr. Hong also gave us a heads-up that the Discovery Channel will be featuring some of the robotic innovations from RoMeLa, so keep an eye out for an update to this blog post with a link to that footage.

ARCH: A humvee that drives itself September 4, 2009

Posted by emiliekopp in industry robot spotlight.

Speaking of autonomous vehicles, here’s an autonomous humvee from TORC Technologies, a company founded by a couple of Virginia Tech graduates several years ago. Needless to say, VT is a hotbed of UGV experts.

The idea is that you have an unmanned humvee at the lead of a convoy. That way, if any IEDs or land mines are encountered along the convoy’s path, the unmanned vehicle is targeted first. The lead humvee can be teleoperated (remotely controlled) by an operator in the chase vehicle, or it can operate semi-autonomously or fully autonomously on its own.

TORC has become an industry expert in unmanned and autonomous vehicle systems. They helped create VictorTango, the Virginia Tech team whose vehicle took third place in the DARPA Urban Challenge. NI has had the pleasure of working with them quite a bit, and they’ve done a lot of their development using LabVIEW and CompactRIO.

Something I thought was cool is that TORC can turn any commercial automobile into an autonomous/teleoperated vehicle in a matter of hours. They do this by interfacing to the vehicle control unit through the Controller Area Network (CAN) bus, and then communicating with it using JAUS (Joint Architecture for Unmanned Systems) commands. JAUS is like a universal language in military robotics: you can control one robot with specific JAUS commands and then control a different robot with the same commands, assuming they have the same computing nodes.
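To give a flavor of what “interfacing through the CAN bus” can look like in code, here is a hedged sketch using the open-source python-can library. TORC’s actual software stack isn’t public, and the arbitration ID and byte layout below are invented purely for illustration.

```python
import can

# Open a CAN interface; SocketCAN on Linux is assumed here, and the channel
# name depends on the adapter wired into the vehicle's bus.
bus = can.interface.Bus(channel="can0", bustype="socketcan")

# An invented steering-command frame: the ID and payload are illustrative
# only, not a real vehicle or JAUS message definition.
steering_cmd = can.Message(
    arbitration_id=0x123,
    data=[0x00, 0x7F],      # e.g. a 16-bit steering target
    is_extended_id=False,
)
bus.send(steering_cmd)

# In a JAUS-based system, high-level messages (a "Set Wrench Effort"
# command, for example) get translated into low-level frames like this,
# which is what lets the same commands drive different vehicles.
```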

As if this blog post doesn’t already have enough acronyms, here’s the name of the autonomous humvee that TORC delivered to the U.S. Army Tank Automotive Research, Development and Engineering Center (TARDEC):

ARCH: Autonomous Remote Control HMMWVs or Autonomous Remote Control for High Mobility Multipurpose Wheeled Vehicles (an acronym that contains yet another long acronym; engineers love this stuff)

Blind Driver Challenge from Virginia Tech: A Semi Autonomous Automobile for the Visually Impaired September 1, 2009

Posted by emiliekopp in industry robot spotlight, labview robot projects.

How nine mechanical engineering undergrads used commercial off-the-shelf (COTS) technology and design hardware and software donated by National Instruments to create a sophisticated, semi-autonomous vehicle that lets the visually impaired perform a task previously thought impossible: driving a car.

Greg Jannaman (pictured in passenger seat), ME senior and team lead for the Blind Driver Challenge project at VT, was kind enough to offer some technical details on how they made this happen.

How does it work?

One of the keys to success was leveraging COTS technology whenever possible. This meant that rather than building things from scratch, the team purchased hardware from commercial vendors, which let them focus on the important stuff, like how to translate visual information to a blind driver.

Example of the information sent back from a LIDAR sensor

So they started with a dune buggy. They tacked a Hokuyo laser rangefinder (LIDAR) onto the front, which essentially pulses a laser signal across an area in front of the vehicle and gathers information about obstacles from the laser signals that bounce back. LIDAR is a lot like radar, only it uses light instead of radio waves.
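To make that concrete, here is a small Python sketch (not the team’s LabVIEW code) of how one LIDAR sweep, a list of range readings fanned across the field of view, might be boiled down to “how far away is the nearest obstacle, and on which side”:

```python
def nearest_obstacle(ranges_m, fov_deg=240.0, max_range_m=5.6):
    """Return (distance, bearing) of the closest valid return in one sweep.
    The 240-degree field of view and ~5.6 m maximum range are typical of
    Hokuyo URG-series scanners and are assumed here for illustration."""
    step = fov_deg / (len(ranges_m) - 1)
    best = None
    for i, r in enumerate(ranges_m):
        if 0.0 < r < max_range_m:                 # skip dropouts / out-of-range returns
            bearing = -fov_deg / 2 + i * step     # degrees, 0 = straight ahead
            if best is None or r < best[0]:
                best = (r, bearing)
    return best

# Example: a fake nine-beam sweep with an obstacle dead ahead at 1 m.
scan = [5.0, 4.8, 3.2, 1.1, 1.0, 1.4, 4.9, 5.1, 5.3]
print(nearest_obstacle(scan))   # -> (1.0, 0.0)
```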

For control of the vehicle, they used a CompactRIO embedded processing platform. They interfaced their sensors and vehicle actuators directly to an onboard FPGA and performed the environmental perception on the real-time 400 MHz PowerPC processor. Processing the sensor feedback in real time allowed the team to send immediate feedback to the driver. But the team did not have to learn how to program the FPGA in VHDL, nor did they have to program the embedded processor with machine-level code. Rather, they performed all programming on one software development platform: LabVIEW. This enabled nine MEs to become embedded programmers on the spot.
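One way to picture that split in textual code (the real system is a LabVIEW FPGA VI plus a real-time VI, not Python) is a fast I/O loop feeding a slower perception loop; everything below is a toy stand-in:

```python
import queue
import random
import threading
import time

scan_queue = queue.Queue()

def read_lidar():
    """Hypothetical stand-in for the real LIDAR driver."""
    return [random.uniform(0.1, 5.6) for _ in range(9)]

def io_loop():
    # Plays the role of the FPGA: fast, deterministic sensor/actuator I/O.
    while True:
        scan_queue.put(read_lidar())
        time.sleep(0.01)

def perception_loop():
    # Plays the role of the real-time processor: heavier math at a slower
    # rate, producing feedback for the driver.
    while True:
        scan = scan_queue.get()
        print("closest return: %.2f m" % min(scan))

threading.Thread(target=io_loop, daemon=True).start()
perception_loop()
```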

But how do you send visual feedback to someone who cannot see? You use their other senses, mainly touch and hearing. The driver wears a vest that contains vibrating motors, much like the motors you would find in your PS2 controller (this is called haptic feedback, for anyone interested). The CompactRIO makes the vest vibrate to notify the driver of obstacle proximity and to regulate speed, just like a car racing video game. The driver also wears a set of headphones. The system sends a series of clicks to the left and right earphones, and the driver uses this audible feedback to navigate around the detected obstacles.
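As a rough illustration of how obstacle proximity could be mapped into those two channels of feedback (a hedged sketch: the thresholds and scaling are invented, and the vest and headphone interfaces are not shown):

```python
def obstacle_to_feedback(distance_m, bearing_deg, max_range_m=5.6):
    """Map the nearest obstacle to a vest vibration intensity (0-1) and a
    left/right click rate for the headphones. The scaling here is
    illustrative, not the team's actual tuning."""
    proximity = max(0.0, 1.0 - distance_m / max_range_m)   # 0 = far, 1 = touching
    vibration = proximity ** 2                 # vibrate harder as it gets close
    side = "left" if bearing_deg < 0 else "right"
    click_rate_hz = 1.0 + 9.0 * proximity      # ~1 Hz far away, ~10 Hz up close
    return vibration, side, click_rate_hz

# Example: an obstacle 1 m away, 30 degrees to the driver's right.
print(obstacle_to_feedback(1.0, 30.0))
```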

The team has already hosted numerous blind drivers to test the vehicle. The test runs have been so successful that they’re having to ask drivers to refrain from doing donuts in the parking lot. And they already have some incredible plans for improving the vehicle even further. Watch the video to find out more about the project and learn about their plans to further incorporate haptic feedback.

Phantom of the Robopera December 19, 2008

Posted by emiliekopp in robot fun.

A while back, I learned about the 3 D’s of Robotics from Dr. Al Wicks, Director of Virginia Tech’s Modal Analysis Laboratory (MAL) and golf-dynamics extraordinaire. He said that a robot’s purpose is to do things that are either Dull, Dirty or Dangerous to humans. Meaning, if it’s boring, gross or puts a human’s life on the line, that’s typically when a robot is designed to take care of business.

This makes sense.

So I scratched my head when I read that Taiwan University has cast robots as the leads in its production of The Phantom of the Opera. “The lead bots (named Thomas and Janet) can both walk, and have silicon facial ‘muscles’ that help them mimic human expressions and mouth movements.” Whoa.

I’m struggling to think which one of the 3 D’s these robots should fall under.

Photo from switched.com

Did I mention that NI’s cofounder, Technical Fellow, and Father of LabVIEW, Jeff Kodosky, loves opera? I wonder if he might be interested in seeing this production…