HAPTIX: Simulation of prosthetic devices

Fundamentally, robotics is about helping people. Robots help us manufacture things, help us build things, and help make our lives easier and more convenient. As robotic systems increase in sophistication and capability, they’re starting to help people more directly, in elder care, rehabilitation centers, and hospitals. In the near future, robotics will become even more tightly integrated with humanity, to the point where cybernetics will be able to restore function to people with disabilities. One such program focuses in particular on military personnel who have lost limbs.

In 2014, DARPA announced its Hand Proprioception and Touch Interfaces (HAPTIX) program, which “seeks to create a prosthetic hand system that moves and provides sensation like a natural hand.”

According to Doug Weber, DARPA program manager of HAPTIX: “We believe that HAPTIX will create a sensory experience so rich and vibrant that the user will want to wear his or her prosthesis full-time and accept it as a natural extension of the body. If we can achieve that, DARPA is even closer to fulfilling its commitment to help restore full and natural functionality to wounded service members.”

Three different teams are involved in the HAPTIX project, and its success will depend on a carefully optimized mix of hardware, user interfaces, and control algorithms. OSRF is proud to be providing a customized version of the Gazebo simulator to the HAPTIX teams, allowing them to run tests on their software without being constrained by hardware availability: essentially, a kind of virtual playground for software engineers.

“The goal of HAPTIX is for OSRF to provide a realistic prosthetic simulation environment for biomechanical engineers to develop controllers for advanced prosthetics with high degrees of freedom,” explains John Hsu, co-founder and Chief Scientist at OSRF. The advanced prosthesis that DARPA is using in the HAPTIX program is DEKA’s “Luke” robotic arm, a 14 DoF cybernetic total arm replacement system. However, the arm is currently controlled by simple user interfaces designed for testing, and part of what HAPTIX hopes to deliver is a set of interfaces that use control signals from muscles and nerves while simultaneously delivering sensory feedback.

After nearly ten years of work and $40 million from DARPA, DEKA’s robotic arm is an amazing piece of hardware, but that’s just the beginning. “The hardware, in my opinion, needs to come before the software,” says Hsu. “They can be designed at the same time, but the hardware has a longer iteration cycle. Once you develop a nice hardware platform that’s stable, then you give it to the software team, and they take off, working really fast on the software while in the meantime trying not to break the hardware.”

This illustrates two reasons why having a good simulation environment is important: first, it lets you start working on the software before the hardware is fully complete, and second, it to some extent insulates software development from the hardware itself, meaning that you can have lots of engineers developing software in parallel, even if you only have one piece of hardware that may be fragile, expensive, and quite often inoperable for one reason or another.

For OSRF, creating and supporting a version of Gazebo for the HAPTIX program involves many different areas. Besides the customized simulation environment, OSRF has also provided teams with an OptiTrack motion capture device, NVIDIA stereo glasses and a 3D monitor, a 3D joystick, and the documentation required to get it all working together flawlessly. This custom version of Gazebo also includes support for a variety of teleoperation hardware, and for the first time, users can interact with the simulation programmatically from both Windows and MATLAB. HAPTIX developers can leverage these 3D sensors and teleoperation systems to translate the motions of physical arms and hands into virtual environments, allowing them to run common hand function tests in the real world and in simulation at the same time. This also lays the foundation for a framework that could provide amputees a powerful and affordable way to learn how to use their new prosthesis.
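To give a feel for what programmatic interaction with the simulated hand involves, here is a minimal sketch of a command/sensor loop. The module and function names are hypothetical stand-ins invented for this illustration; the interfaces actually shipped with the simulator are C and MATLAB APIs, so this shows the pattern, not the real API.

```python
# Illustrative only: a hypothetical Python-style wrapper around a hand
# simulation API. The interfaces actually shipped with the HAPTIX simulator
# are C and MATLAB; the module and function names below are made up to show
# the shape of the command/sensor loop, not the real API.
import math
import time

import haptix_sim  # hypothetical binding, not a real package

def wave_fingers(duration_s=5.0, rate_hz=50.0):
    """Send a periodic position command to every motor of the simulated hand
    and read back joint sensor data each cycle."""
    info = haptix_sim.get_robot_info()   # e.g. number of motors and joints
    period = 1.0 / rate_hz
    start = time.time()
    while time.time() - start < duration_s:
        t = time.time() - start
        ref = 0.3 * math.sin(2.0 * math.pi * t)   # small oscillation
        command = [ref] * info.motor_count
        sensors = haptix_sim.update(command)      # one round-trip sim step
        print(sensors.joint_pos[:3])              # peek at a few joint angles
        time.sleep(period)

if __name__ == "__main__":
    wave_fingers()
```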

Once the HAPTIX teams receive their DEKA arms, OSRF’s job becomes even more important, according to Hsu, because they’ll get a chance to see how well the simulation is actually working and then refine it to bring it as close to reality as possible. “I’m really looking forward to the validation part,” Hsu says. “I think that’s one of the big missing pieces for many simulation platforms: good validation data. When we were working on Gazebo for the DARPA Robotics Challenge, we never had an ATLAS robot. Getting the DEKA hand to do validation is huge.”

Validation is the process of making sure that commands sent to the simulated DEKA arm result in the same movements as identical commands sent to the real DEKA arm. “We send commands to the real hand and the simulated hand to see if they behave differently,” explains Hsu. “If they do, we update our model to make them match.” The closer the simulation matches, the more useful it will be to the HAPTIX teams. The end goal is, of course, to get everything working on the real hardware, but an accurate and detailed simulator is critical to the development of effective software.
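A minimal sketch of the comparison step might look like the following, assuming time-aligned joint trajectories have already been logged from the real and simulated arms after sending them identical commands; the file names and loading step are placeholders.

```python
# A minimal sketch of the comparison step in validation: take logged joint
# trajectories from the real and simulated arms (driven by the same command
# sequence) and report per-joint error. File names are placeholders.
import numpy as np

def trajectory_error(real_traj, sim_traj):
    """Both inputs: arrays of shape (timesteps, num_joints), time-aligned.
    Returns the RMS error per joint, a simple fidelity metric."""
    diff = real_traj - sim_traj
    return np.sqrt(np.mean(diff ** 2, axis=0))

if __name__ == "__main__":
    real = np.load("real_arm_joints.npy")   # hypothetical log from hardware
    sim = np.load("sim_arm_joints.npy")     # same commands replayed in Gazebo
    rms = trajectory_error(real, sim)
    for joint, err in enumerate(rms):
        print("joint %d: RMS error %.4f rad" % (joint, err))
```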

The first generation of the DEKA arm recently arrived at OSRF for validation testing, and the complete hardware is expected before the end of the year. OSRF has been steadily releasing a series of stable versions of the HAPTIX simulator, and as the fidelity of simulated position holding, force control and response, and other dynamics is verified on the arm over the next few months, OSRF will continue upgrading the simulation software to make sure that the HAPTIX teams have all of the tools that they need to progress as quickly and efficiently as possible.

By early 2017, Phase 1 of HAPTIX will be complete, and the software and hardware components that prove to be the most successful will continue into Phase 2, the end goal of which is a complete, functional HAPTIX system. DARPA is hoping that take-home trials of such a system will happen by 2019, and that soon after, any amputee who needs one will be able to benefit from a prosthetic hand that acts (and feels) just like the real thing.

ROS at the Intel Developer Forum

Next week is the Intel Developer Forum in San Francisco.

If you know anything about ROS and robots, then by now you know about the integration between ROS and Intel’s RealSense Camera.

Given this relationship, you can expect to see and hear a lot about ROS next week at IDF.

We encourage you to check out these sessions next week in San Francisco:

Michael Ferguson (Fetch Robotics): Accelerating Your Robotics Startup with ROS

Michael Ferguson spent a year as a software engineer at Willow Garage, helping rewrite the ROS calibration system, among other projects. In 2013, he co-founded Unbounded Robotics, and is currently the CTO of Fetch Robotics. At Fetch, Michael is one of the primary people responsible for making sure that Fetch’s robots reliably fetch things. Mike’s ROSCon talk is about how to effectively use ROS as an integral part of your robotics business, including best practices, potential issues to avoid, and how you should handle open source and intellectual property.

Because of how ROS works, much of your software development (commercial or otherwise) depends on many external packages. These packages are constantly being changed, sometimes for the better and sometimes for the worse, at unpredictable intervals that are completely out of your control. Continuous integration, built on systems that handle automated builds, testing, and deployment, can help you catch new problems as early as possible. Michael also shares a useful way to avoid new problems: don’t switch to new software as soon as it becomes available; instead, stick with long-term support releases, such as Ubuntu 14.04 and ROS Indigo.
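As a minimal sketch of the kind of check a CI job runs on every commit, here is a plain Python unit test; the package and module names (my_robot_pkg.kinematics) are placeholders, and a real setup would also run builds and integration tests against the upstream packages you depend on.

```python
#!/usr/bin/env python
# A minimal sketch of a test a continuous integration job can run on every
# commit, so breakage in your own code (or in an upstream package you depend
# on) is caught early. The package, module, and expected value below are
# placeholders for illustration.
import unittest

from my_robot_pkg.kinematics import forward_kinematics  # hypothetical module

class TestForwardKinematics(unittest.TestCase):
    def test_zero_pose(self):
        # With all joints at zero, the end effector should sit at the arm's
        # known home position (placeholder value).
        pose = forward_kinematics([0.0] * 6)
        self.assertAlmostEqual(pose.x, 0.42, places=3)

if __name__ == "__main__":
    unittest.main()
```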

While the foundation of ROS is built on open source, using ROS doesn’t mean that all of the software magic you create for your robotics company has to be given away for free. ROS supports many different kinds of licenses, some of which your lawyers will be happier with than others, but there are enough options with enough flexibility that licensing doesn’t have to be an issue. Using Fetch Robotics as an example, Mike discusses which components of ROS his company uses in their commercial products, including ROS Navigation and MoveIt. With these established packages as a base, Fetch was able to quickly put together operational demos, and then iterate on an operating platform by developing custom plugins optimized for their specific use cases.
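To make the idea of building on established packages concrete, here is a minimal sketch (generic, not Fetch’s code) of handing a navigation goal to the ROS Navigation stack’s move_base action server through actionlib; the map frame and goal coordinates are example values.

```python
#!/usr/bin/env python
# Minimal sketch of building on an established package: send a single
# navigation goal to the standard ROS Navigation stack (move_base) through
# actionlib. The frame name and goal coordinates are example values.
import actionlib
import rospy
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

def send_goal(x, y):
    client = actionlib.SimpleActionClient("move_base", MoveBaseAction)
    client.wait_for_server()

    goal = MoveBaseGoal()
    goal.target_pose.header.frame_id = "map"
    goal.target_pose.header.stamp = rospy.Time.now()
    goal.target_pose.pose.position.x = x
    goal.target_pose.pose.position.y = y
    goal.target_pose.pose.orientation.w = 1.0  # face straight ahead

    client.send_goal(goal)
    client.wait_for_result()
    return client.get_state()

if __name__ == "__main__":
    rospy.init_node("demo_nav_goal")
    print(send_goal(1.0, 0.5))
```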

When considering how to use ROS as part of your company, it’s important to look closely at the packages you decide to incorporate, to make sure that they have a friendly license, good documentation, recent updates, built-in tests, and a standardized interface. Keeping track of all of this will make your startup life easier in the long run. As long as you’re careful, relying on ROS can make your company more agile, more productive, and ready to make a whole bunch of money off of the future of robotics.

Next up: Ryan Gariepy (Clearpath Robotics)

ROSCon 2016: Proposal deadline July 8th and venue information

With just over 3 months to go before ROSCon 2016, we have some important announcements:

* The deadline for submitting presentation proposals is July 8, 2016. If you want to present your work at ROSCon this year, make sure to submit your proposal before the deadline: http://roscon.ros.org/2016/#call-for-proposals.
* The conference will be held at the Conrad Seoul. Hotel rooms at the discounted conference rate are limited! Reserve your room today. http://roscon.ros.org/2016/#location. Also listed are some options for child care during the conference, which we hope will be helpful for attendees traveling with families.
* Registration will open in a couple of weeks: http://roscon.ros.org/2016/#important-dates.

We can’t put on ROSCon without the support of our generous sponsors, who now include Clearpath Robotics, Southwest Research Institute, GaiTech, and ARM!
http://roscon.ros.org/2016/#sponsors

We’d like to especially thank our Platinum and Gold Sponsors: Fetch Robotics, Clearpath Robotics, Intel, Southwest Research Institute, and Yujin Robot.

Moritz Tenorth (Magazino): Maru and Toru — Item-Specific Logistics Solutions Based on ROS

It’s not sexy, but the next big thing for robots is starting to look like warehouse logistics. The potential market is huge, and a number of startups are developing mobile platforms to automate dull and tedious order fulfillment tasks. Transporting products is just one problem worth solving: picking those products off of shelves is another. Magazino is a German startup that’s developing a robot called Toru that can grasp individual objects off of warehouse shelves, a particularly tricky task that Magazino is tackling with ROS.

Moritz Tenorth is Head of Software Development at Magazino. In his ROSCon talk, Moritz describes Magazino’s Toru as “a mobile pick and place robot that works together with humans in a shared environment,” which is exactly what you’d want in an e-commerce warehouse. The reason picking is a hard problem, as Moritz explains, is perception coupled with dynamic environments and high uncertainty: if you want a robot that can pick a wide range of objects, it needs to be able to flexibly understand and react to its environment, something that robots are notoriously bad at. ROS is particularly well suited to this, since it’s easy to integrate as much sensing as you need into your platform.
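As an illustration of that kind of sensor integration (a generic sketch, not Magazino’s code), the snippet below uses ROS message_filters to deliver approximately time-synchronized RGB and depth images to a single perception callback; the topic names are common camera-driver defaults and may differ on a real robot.

```python
#!/usr/bin/env python
# Illustrative sketch of pulling multiple sensor streams into one perception
# callback with ROS: approximate time synchronization of an RGB image and a
# depth image. Topic names are typical camera-driver defaults.
import message_filters
import rospy
from sensor_msgs.msg import Image

def rgbd_callback(rgb_msg, depth_msg):
    # A grasp-detection pipeline would run here; we just report timestamps.
    rospy.loginfo("RGB %s / depth %s", rgb_msg.header.stamp, depth_msg.header.stamp)

if __name__ == "__main__":
    rospy.init_node("pick_perception")
    rgb_sub = message_filters.Subscriber("/camera/rgb/image_raw", Image)
    depth_sub = message_filters.Subscriber("/camera/depth/image_raw", Image)
    sync = message_filters.ApproximateTimeSynchronizer(
        [rgb_sub, depth_sub], queue_size=10, slop=0.05)
    sync.registerCallback(rgbd_callback)
    rospy.spin()
```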

Magazino’s experience building and deploying their robots has given them a unique perspective on commercializing ROS-based robots for warehouse work. For example, databases and persistent storage are crucial (as opposed to a focus on runtime), and real-time control turns out to be less important than being able to quickly and easily develop planning algorithms and reduce system complexity. Software components in the ROS ecosystem can vary wildly in quality and upkeep, although ROS-Industrial is working hard to develop code quality metrics. Magazino is also working on remote support and analysis tools, and trying to determine how much communication is required in a multi-robot system, something that native ROS isn’t very good at.
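To illustrate the persistence point, here is a minimal sketch of logging pick outcomes from a ROS callback into a local SQLite database so the data survives restarts and can be analyzed later; the topic name and the use of std_msgs/String for the outcome are assumptions made for the example.

```python
#!/usr/bin/env python
# Illustrative sketch of persistent storage in a ROS node: log each pick
# attempt's outcome to a local SQLite database so it survives restarts.
# The /pick_result topic and std_msgs/String message are assumptions.
import sqlite3

import rospy
from std_msgs.msg import String

# check_same_thread=False because rospy callbacks run in their own thread.
conn = sqlite3.connect("pick_log.db", check_same_thread=False)
conn.execute("CREATE TABLE IF NOT EXISTS picks (stamp REAL, outcome TEXT)")

def on_pick_result(msg):
    conn.execute("INSERT INTO picks VALUES (?, ?)",
                 (rospy.get_time(), msg.data))
    conn.commit()

if __name__ == "__main__":
    rospy.init_node("pick_logger")
    rospy.Subscriber("/pick_result", String, on_pick_result)
    rospy.spin()
```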

Even with those (few) constructive criticisms in mind, Magazino says that ROS is a fantastic way to quickly iterate on both software and hardware in parallel, especially when combined with 3D printed prototypes for testing. Most importantly, Magazino feels comfortable with ROS: it has a familiar workflow, versatile build system, flexible development architecture, robust community that makes hiring a cinch, and it’s still (somehow) easy to use.

Next up: Michael Ferguson (Fetch Robotics)

Tom Moore: Working with the Robot Localization Package

Clearpath Robotics is best known for building yellow and black robots that are the research platforms you’d build for yourself, if it weren’t so much easier to just get them from Clearpath. All of their robots run ROS, and Clearpath has been heavily involved in the ROS community for years. Tom Moore, now with Locus Robotics, spent seven months as an autonomy developer at Clearpath. He is the author and maintainer of the robot_localization ROS package, and gave a presentation about it at ROSCon 2015.

robot_localization is a general purpose state estimation package that gives you (and your robot) an accurate sense of where it is and what it’s doing, based on input from as many sensors as you want. The more sensors you’re able to use for a state estimate, the better that estimate is going to be, especially if you’re dealing with real-worldish things like unreliable GPS or hardware that flakes out on you from time to time. robot_localization has been specifically designed to handle cases like these in an easy to use and highly customizable way. It provides state estimation in 3D space, per-sensor control over which measurements are fused, support for an unlimited number of sensors (just in case you have 42 IMUs and nothing better to do), and more.
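As a minimal sketch of what feeding sensors to robot_localization looks like, the node below publishes wheel odometry and IMU messages on topics that an ekf_localization_node instance would typically be configured to fuse (for example through its odom0 and imu0 parameters); the values are dummies standing in for real driver output.

```python
#!/usr/bin/env python
# Minimal sketch of feeding two sensors to robot_localization: publish wheel
# odometry and IMU messages on topics that an ekf_localization_node would be
# configured to fuse (e.g. via its odom0 and imu0 parameters). Values here
# are dummies; on a real robot the sensor drivers publish these.
import rospy
from nav_msgs.msg import Odometry
from sensor_msgs.msg import Imu

if __name__ == "__main__":
    rospy.init_node("fake_sensors")
    odom_pub = rospy.Publisher("/wheel/odometry", Odometry, queue_size=10)
    imu_pub = rospy.Publisher("/imu/data", Imu, queue_size=10)
    rate = rospy.Rate(30)
    while not rospy.is_shutdown():
        odom = Odometry()
        odom.header.stamp = rospy.Time.now()
        odom.header.frame_id = "odom"
        odom.child_frame_id = "base_link"
        odom.twist.twist.linear.x = 0.1   # pretend we creep forward slowly

        imu = Imu()
        imu.header.stamp = odom.header.stamp
        imu.header.frame_id = "base_link"
        imu.orientation.w = 1.0           # identity orientation

        odom_pub.publish(odom)
        imu_pub.publish(imu)
        rate.sleep()
```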

Tom’s ROSCon talk takes us through some typical use cases for robot_localization, describes where the package fits in with the ROS navigation stack, explains how to prepare your sensor data, and how to configure estimation nodes for localization. The talk ends with a live(ish) demo, followed by a quick tutorial on how to convert data from your GPS into your robot’s world frame.

The robot_localization package is up to date and very well documented, and you can learn more about it on the ROS Wiki.

Next up: Moritz Tenorth, Ulrich Klank, & Nikolas Engelhard (Magazino GmbH)

Matt Vollrath and Wojciech Ziniewicz (End Point): ROS-Driven User Applications in Idempotent Environments

Matt Vollrath and Wojciech Ziniewicz work at an e-commerce consultancy called End Point, where they provide support for Liquid Galaxy, a product that’s almost as cool as it sounds. Originally an open source project begun by Google engineers on their twenty percent time, Liquid Galaxy is a data visualization system consisting of a collection of large vertical displays that wrap around you horizontally. The displays show an immersive (up to 270°) image that’s ideal for data presentations, virtual tours, Google Earth, or anywhere you want a visually engaging environment. Think events, trade shows, offices, museums, galleries, and the like.

Last year, End Point decided to take all of the ad hoc services and protocols that they’d been using to support Liquid Galaxy and move everything over to ROS. The primary reason to do this was ROS support for input devices: you can use just about anything to control a Liquid Galaxy display system, from basic touchscreens to Space Navigator 3D mice to Leap Motions to depth cameras. The modularity of ROS is inherently friendly to all kinds of different hardware.
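Here is an illustrative sketch (not End Point’s code) of that kind of input-device plumbing: take the geometry_msgs/Twist stream published by a 3D-mouse driver such as spacenav_node and rebroadcast a scaled version as a view-motion command for the display wall. The /spacenav/twist topic follows the driver’s convention, while /lg/view_motion is a made-up output topic for the example.

```python
#!/usr/bin/env python
# Illustrative sketch of ROS input-device plumbing for a display wall: scale
# a 3D mouse's twist stream into a camera/view motion command. The input
# topic follows the spacenav_node convention; the output topic is made up.
import rospy
from geometry_msgs.msg import Twist

class ViewController(object):
    def __init__(self):
        self.pub = rospy.Publisher("/lg/view_motion", Twist, queue_size=10)
        rospy.Subscriber("/spacenav/twist", Twist, self.on_twist)

    def on_twist(self, msg):
        scaled = Twist()
        scaled.linear.x = 0.5 * msg.linear.x     # damp translation
        scaled.linear.y = 0.5 * msg.linear.y
        scaled.angular.z = 0.25 * msg.angular.z  # damp yaw
        self.pub.publish(scaled)

if __name__ == "__main__":
    rospy.init_node("lg_view_controller")
    ViewController()
    rospy.spin()
```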

Check out this week’s ROSCon15 video as Matt and Wojciech take a deep dive into their efforts in bringing ROS to bear for these unique environments.

Next up: Tom Moore (Clearpath Robotics)

Jerry Towler (SwRI): Mapviz – An Extensible 2D Visualization Tool for Automated Vehicles

ROS already comes with a fantastic built-in visualization tool called rviz, so why would you want to use anything else? At Southwest Research Institute, Jerry Towler explains how they’ve created a new visualization tool called Mapviz that’s specifically designed for the kind of large-scale outdoor environments necessary for autonomous vehicle development. Specifically, Mapviz is able to integrate all of the sensor data that you need on top of a variety of two-dimensional maps, such as road maps or satellite imagery.

As an autonomous vehicle visualization tool, Mapviz works just like you’d expect that it would, which Jerry demonstrated with several demos at ROSCon. Mapviz shows you a top-down view of where your vehicle is, and tracks it across a basemap that seamlessly pulls image tiles at multiple resolutions from a wide variety of local or networked map servers, including Open MapQuest and Bing Maps. Mapviz is, of course, very plugin-friendly. You can add things like stereo disparity feeds, GPS fixes, odometry, grids, pathing data, image overlays, projected laser scans, markers (including textured markers) from most sensor types, and more. It can’t really handle three dimensional data (although it’ll do two-and-a-half dimensions via color gradients), but for interactive tracking of your vehicle’s navigation and path planning behavior, Mapviz should offer most of what you need.
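As a small example of producing the kind of data such an overlay can consume (a sketch under assumptions; the exact Mapviz plugin and topic name may differ), the node below publishes a slowly drifting sensor_msgs/NavSatFix GPS fix that a map-based visualizer can track on its basemap.

```python
#!/usr/bin/env python
# Illustrative sketch: publish a sensor_msgs/NavSatFix GPS fix, the kind of
# data a map-based visualization tool can overlay on a basemap. Topic name
# and coordinates (roughly San Antonio) are example values.
import rospy
from sensor_msgs.msg import NavSatFix

if __name__ == "__main__":
    rospy.init_node("fake_gps")
    pub = rospy.Publisher("/gps/fix", NavSatFix, queue_size=10)
    rate = rospy.Rate(1)
    lat, lon = 29.4246, -98.4951
    while not rospy.is_shutdown():
        fix = NavSatFix()
        fix.header.stamp = rospy.Time.now()
        fix.header.frame_id = "gps"
        fix.latitude = lat
        fix.longitude = lon
        pub.publish(fix)
        lat += 1e-5  # drift slowly so the track moves on the basemap
        rate.sleep()
```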

For a variety of non-technical reasons, SwRI hasn’t been able to release all of its tools and plugins as open source quite yet, but they’re working on getting approval as fast as they can. They’re also in the process of developing even more enhancements for Mapviz, and you can keep up to date with the latest version of the software on GitHub.

Next up: Matt Vollrath & Wojciech Ziniewicz (End Point)

Michael Aeberhard (BMW): Automated Driving with ROS at BMW

BMW has been working on automated driving for the last decade, steadily implementing more advanced features ranging from emergency stop assistance and autonomous highway driving to fully automated valet parking and 360° collision avoidance. Several of these projects were presented at the 2015 Consumer Electronics Show, and as it turns out, the cars were running ROS for both environment detection and planning.

BMW, being BMW, has no problem getting new research hardware. Their latest development platform is a 335i GT, which comes with an advanced driver assistance system based around cameras and radar. The car has been outfitted with four low-profile laser scanners and one long-range radar, but otherwise, it’s pretty close (in terms of hardware) to what’s available in production BMWs.

Why did BMW choose to move from their internally developed software architecture to ROS? Michael explains how ROS’ reputation in the robotics research community prompted his team to give it a try, and they were impressed with its open source nature, distributed architecture, existing selection of software packages, as well as its helpful community. “A large user base means stability and reliability,” Michael says, “because somebody else probably already solved the problem you’re having.” Additionally, using ROS rather than a commercial software platform makes it much easier for BMW to cooperate with universities and research institutions.

Michael discusses the ROS software architecture that BMW is using for its autonomous car development, and shows how the software interprets sensor data to identify obstacles and lane markings, then performs localization and trajectory planning to enable full highway autonomy based on a combination of lane keeping and dynamic cruise control. BMW also created their own suite of RQT and rviz plugins specifically designed for autonomous vehicle development.
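The sketch below is an illustrative node skeleton, not BMW’s software: it shows the general shape of such an architecture in ROS, consuming raw laser scans and publishing a trajectory for downstream control, with placeholder topic names and a deliberately trivial “planner.”

```python
#!/usr/bin/env python
# Illustrative node skeleton (not BMW's actual software): consume raw sensor
# data, run detection/planning, publish a trajectory for downstream control.
# Topic names and the trivial straight-ahead "planner" are placeholders.
import rospy
from geometry_msgs.msg import PoseStamped
from nav_msgs.msg import Path
from sensor_msgs.msg import LaserScan

class HighwayPlanner(object):
    def __init__(self):
        self.path_pub = rospy.Publisher("/planned_trajectory", Path, queue_size=1)
        rospy.Subscriber("/front_laser/scan", LaserScan, self.on_scan)

    def on_scan(self, scan):
        # Real systems fuse several scanners and radar, detect obstacles and
        # lane markings, then optimize a trajectory. Here we just emit a
        # straight-ahead path whose length shrinks with the nearest return.
        valid = [r for r in scan.ranges if r > scan.range_min]
        clearance = min(valid) if valid else scan.range_max
        path = Path()
        path.header = scan.header
        for i in range(10):
            pose = PoseStamped()
            pose.header = scan.header
            pose.pose.position.x = clearance * (i + 1) / 10.0
            pose.pose.orientation.w = 1.0
            path.poses.append(pose)
        self.path_pub.publish(path)

if __name__ == "__main__":
    rospy.init_node("highway_planner")
    HighwayPlanner()
    rospy.spin()
```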

After about two years of experience with ROS, BMW likes a lot of things about it, but Michael and his team do have some constructive criticisms: message transport needs more work (although ROS 2 should help with this), managing configurations for different robots is problematic, and it’s difficult to enforce compliance with industry standards like ISO and AUTOSAR, which will be necessary for software that’s usable in production vehicles.

Next up: Jerry Towler & Marc Alban (SwRI)

Amit Moran (Intel): Introducing ROS-RealSense: 3D Empowered Robotics Innovation Platform

While Intel is best known for making computer processors, the company is also interested in how people interact with all of the computing devices that have Intel inside. In other words, Intel makes brains, but they need senses to enable those brains to understand the world around them. Intel has developed two very small and very cheap 3D cameras (one long range and one short range) called RealSense, with the initial intent of putting them into devices like laptops and tablets for applications such as facial recognition and gesture tracking.

Robots are also in dire need of capable and affordable 3D sensors for navigation and object recognition. Fortunately, Intel understands this and has created the RealSense Robotics Innovation Program to help drive innovation using their hardware. Intel itself isn’t a robotics company, but as Amit explains in his ROSCon talk, they want to be a part of the robotics future, which is why they prioritized ROS integration for their RealSense cameras.

A RealSense ROS package has been available since 2015, and Intel has been listening to feedback from roboticists and steadily adding more features. The package provides access to the RealSense camera data (RGB, depth, IR, and point cloud), and will eventually include basic computer vision functions (including plane analysis and blob detection) as well as more advanced functions like skeleton tracking, object recognition, and localization and mapping tools.
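Consuming the driver’s output from ROS looks like working with any other camera: subscribe to the image and point cloud topics. The topic names in the sketch below are typical defaults and may differ between versions of the RealSense ROS package.

```python
#!/usr/bin/env python
# Minimal sketch of consuming a RealSense driver's output in ROS: subscribe
# to the color image and depth point cloud streams. Topic names are typical
# camera-driver defaults and may vary between package versions.
import rospy
from sensor_msgs.msg import Image, PointCloud2

def on_color(msg):
    rospy.loginfo("color image %dx%d", msg.width, msg.height)

def on_points(msg):
    rospy.loginfo("point cloud with %d points", msg.width * msg.height)

if __name__ == "__main__":
    rospy.init_node("realsense_listener")
    rospy.Subscriber("/camera/color/image_raw", Image, on_color)
    rospy.Subscriber("/camera/depth/points", PointCloud2, on_points)
    rospy.spin()
```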

Intel RealSense 3D camera developer kits are available now, and you can order one for as little as $99.

Next up: Michael Aeberhard, Thomas Kühbeck, Bernhard Seidl, et al. (BMW Group Research and Technology)
Check out last week’s post: The Descartes Planning Library for Semi-Constrained Cartesian Trajectories