[Figure: in order, partially closing each hand separately; running a command file that opens one hand and closes the other; running a movement file that produces a wave.]
For my final project, I chose to develop an API for controlling a humanoid robot simulated in Gazebo. I chose this topic because of several in-class discussions concerning the benefits of high-level robot control (without the need to directly manipulate code) and the current lack of such a controller for the DRC simulated robot. Additionally, I was not previously familiar with APIs and wanted to take the opportunity to learn about them and apply them to robotics.
In researching the topic, I learned that an API (application programming interface) is an interface that allows control of a system in a way that is relatively intuitive and that does not require interacting directly with the code that comprises the system. Examples of APIs include graphics libraries for programming languages (drawCanvas(inputs), drawLine(inputs), etc.), the Google Earth API (which allows for the visualization of a user's geospatial data within Google Earth), and visual programming languages (in which a user typically writes programs by connecting blocks of pre-made code). Seeing as all of these examples enable a user to perform a task via a program without needing to be familiar with the program's code itself, an API could be beneficial for coding complex tasks and behaviors into simulated and physical robots, including for the DRC challenge.
My approach was to create a system whereby fundamental units of movement (primitives) could be combined to create slightly more complex movements, which in turn could be treated as single units and combined to produce even more complex movements. This hierarchical, modular structure of control could continue until the robot eventually becomes capable of achieving any task the user desires.
To begin, I looked at tutorials for controlling virtual robots in the Gazebo simulator and based my project on a demo that used rospy to send commands to Gazebo, controlling the robot via JointTrajectory commands. I then moved the portion of code responsible for generating the tutorial's movement into a separate file, so that all that remained was code that would apply to any form of motion, not just this particular tutorial. Next, I wrote a text-based interface that would let me open and run the tutorial-specific code so that the simulated robot would behave accordingly. This setup also meant that I could write my own commands in another file and, once I called it, the robot would move appropriately. This became the basis of the API.
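The separation described above can be sketched in plain Python. The file layout and function names here (get_waypoints, run_movement) are illustrative assumptions, not the project's actual names, and the rospy publishing step is replaced by simply collecting the steps:

```python
# Minimal sketch of separating movement-specific code from a generic runner.

# --- contents of a movement file, e.g. a hypothetical wave.py ---
def get_waypoints():
    """Return (joint_name, angle_deg, duration_ms) steps for a wave motion."""
    return [
        ("r_arm_shx", -80, 400),   # raise the arm out to the side
        ("r_arm_elx",  30, 300),   # swing the forearm one way...
        ("r_arm_elx", -30, 300),   # ...and back
    ]

# --- generic runner: applies to ANY movement file, not just the wave ---
def run_movement(get_waypoints_fn):
    """Execute each waypoint in order. In the real project each step would
    be published to Gazebo as a JointTrajectory message via rospy; here
    the steps are just collected and returned."""
    executed = []
    for joint, angle, duration in get_waypoints_fn():
        executed.append((joint, angle, duration))
    return executed
```

Because the runner only depends on the waypoint interface, swapping in a different movement file requires no changes to the runner itself.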
Next, I enhanced the text interface. I wanted a user to be able to specify details about the movement via the API instead of needing to modify pre-existing code. Consequently, I wrote the interface such that a single line of instructions would indicate which movement file to use and what the parameters of the movement were (see Results for examples).
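A single-line instruction of this kind can be handled by a small parser. This is a hedged sketch of one plausible approach, not the project's actual code; the tokenizing rules are assumptions:

```python
def parse_command(line):
    """Split one API line into a command name and its arguments,
    converting numeric tokens to floats and leaving names as strings.
    e.g. 'move r_arm_shx 45 500' -> ('move', ['r_arm_shx', 45.0, 500.0])"""
    tokens = line.split()
    name, raw_args = tokens[0], tokens[1:]
    args = []
    for tok in raw_args:
        try:
            args.append(float(tok))   # numeric parameter (angle, time, ...)
        except ValueError:
            args.append(tok)          # symbolic parameter (joint name, ...)
    return name, args
```

The interface can then dispatch on the command name and pass the remaining arguments to the corresponding movement file.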
To enable the development of complex, hierarchically structured movements, I added file-reading capability to the API. This means that a user could save a string of API commands within a text file, which the API could then read and execute. Thus, instead of issuing many single commands to the API manually, the instructions for a complex motion could be condensed into running a single file.
I then focused on making the API remember the state of the robot. Once the robot has changed its position, the API retains this information so that future movements are based on the robot's current position, rather than resetting the robot to its default position before each new command.
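One simple way to realize this state memory is a table of current joint angles that each command updates. This sketch assumes moves are relative to the current pose, which is one reading of the behavior described above:

```python
class RobotState:
    """Track current joint angles so each move builds on the present pose
    instead of restarting from the robot's default position."""

    def __init__(self):
        self.angles = {}   # joint name -> current angle in degrees

    def move(self, joint, delta_deg):
        """Apply a relative move and return the new absolute target angle,
        which would then be sent to the simulator."""
        new_angle = self.angles.get(joint, 0.0) + delta_deg
        self.angles[joint] = new_angle
        return new_angle
```

With this in place, issuing the same move command twice continues the motion rather than repeating it from the default pose.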
By writing only one primitive function (which I named move.py), I was able to move any joint in the DRC simulated robot via the API. Instead of modifying the code every time I wanted to move a different joint, I entered different specifications via the interface. For example, if I wanted to lower the right arm by 45 degrees in 500 milliseconds, I would type:
move r_arm_shx 45 500
If I instead wanted to raise the left arm by 20 degrees in 300 milliseconds, I would type:
move l_arm_shx 20 300
To test the flexibility of the API in handling different movement functions and quantities of arguments, I also wrote a primitive that performs an oscillating motion. To make the right shoulder move through 3/4 of an oscillation in 700 milliseconds with a peak angle of 30 degrees, I would enter into the API:
oscillate r_arm_shx 30 0.75 700
The simulated shoulder moved in a corresponding manner, demonstrating the generalized, adaptive nature of the API.
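The oscillation primitive's trajectory can be sampled as a sine wave; this is a hypothetical reconstruction of the math, with the function name, step count, and exact waveform being assumptions:

```python
import math

def oscillation_waypoints(peak_deg, cycles, duration_ms, steps=20):
    """Sample a sinusoidal joint trajectory: `cycles` oscillations
    (possibly fractional, e.g. 0.75) of amplitude `peak_deg`, spread
    evenly over `duration_ms`. Returns (time_ms, angle_deg) pairs."""
    waypoints = []
    for i in range(steps + 1):
        t = duration_ms * i / steps
        angle = peak_deg * math.sin(2 * math.pi * cycles * i / steps)
        waypoints.append((t, angle))
    return waypoints
```

For the example above (`oscillate r_arm_shx 30 0.75 700`), the joint starts at 0, peaks at 30 degrees, and ends at -30 degrees after 700 milliseconds, since 3/4 of a cycle stops at the trough.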
Although the API was intended for JointTrajectory-based commands, I generalized it such that even non-JointTrajectory commands could be executed. Consequently, I was able to write a script that allowed me to control the Sandia Hands, which have a controller of their own. If I wanted to spherically close the left hand halfway, I would type:
sandia l s 0.5
Additionally, I could combine some of these commands into text files and run them together. For example, I could move both hands by saving their respective commands in a file called movehands.txt and typing into the API:

runfile movehands.txt
Doing so allows for the generation of complex movements such that, once a movement is established, the commands comprising it do not need to be revisited or entered manually, and the instructions for that complex movement can be condensed into a single API command. Furthermore, a user could theoretically embed runfile commands within text files, resulting in movement commands with multi-tiered, hierarchical structures.
The API, though developed in the ways described above, would need some additional work to become fully functional. One primary area involves executing runfile commands within text files, which would allow for hierarchical structures of unlimited depth. Another is better synchronization between the API and the simulator: for example, when multiple JointTrajectory commands are run from a text file, each command is generated properly on its own, but executed together they can instantly move a joint directly to its final position.
I learned a great deal from this project and from the class in general. Namely, I learned about the Gazebo simulator and how to drive it via rospy. I also learned the benefits of controlling a robot in ways that do not require extensive knowledge of the underlying code, which enables both programmers and non-programmers to take advantage of the system. Finally, by researching APIs and writing my own, I discovered the traits that make an API practical and beneficial.