This project is based on Nvidia's open-source "jetson-reinforcement" project developed by Dustin Franklin. The goal is to create a DQN agent and define reward functions that teach a robotic arm to carry out two primary objectives:
Have any part of the robot arm touch the object of interest, with at least 90% accuracy.
Have only the gripper base of the robot arm touch the object, with at least 80% accuracy.
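The two objectives differ only in which collision counts as a win. A minimal sketch of that check is below; the `"gripper"` substring match and the link-name format are assumptions for illustration (Gazebo typically reports contacts as `model::link::collision` strings), not the required solution.

```cpp
#include <cassert>
#include <string>

// Decide whether a reported contact with the object counts as a win.
// contactLink: name of the arm link involved in the collision ("" if none).
// gripperOnly: false for objective 1 (any arm part), true for objective 2.
bool checkWin(const std::string& contactLink, bool gripperOnly)
{
    if (contactLink.empty())
        return false;                     // no arm/object contact this frame

    if (!gripperOnly)
        return true;                      // objective 1: any arm link wins

    // Objective 2: only the gripper-base link counts (name is assumed here).
    return contactLink.find("gripper") != std::string::npos;
}
```

In `ArmPlugin.cpp` this kind of test would gate the end-of-episode reward: a qualifying contact issues the win reward, while a gripper-only run treats other contacts as a loss.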
Building from Source (Nvidia Jetson TX2)
Run the following commands from terminal to build the project from source:
$ sudo apt-get install cmake
$ git clone https://github.com/udacity/RoboND-DeepRL-Project
$ cd RoboND-DeepRL-Project
$ git submodule update --init
$ mkdir build
$ cd build
$ cmake ../
$ make
During the cmake step, Torch will be installed, so this step can take a while. It will download packages and ask for your sudo password during the install.
Testing the API
To make sure that the reinforcement learners are still functioning properly from C++, a simple example of using the API, called catch, is provided. Similar in concept to Pong, a ball drops from the top of the screen, and the agent must catch it before it reaches the bottom of the screen by moving its paddle left or right.
To test the textual catch sample, run the following executable from the terminal:
$ cd RoboND-DeepRL-Project/build/aarch64/bin
$ ./catch
Internally, catch is using the dqnAgent API from our C++ library to implement the learning.
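The loop that catch runs can be sketched in a self-contained form. In the real sample, action selection and reward feedback go through the dqnAgent API; here a hard-coded greedy policy stands in for the learned network so the state/action/reward structure is runnable on its own (grid size, names, and the policy are illustrative assumptions).

```cpp
#include <cassert>

// Toy stand-in for the catch environment: a ball falls one row per step
// while the paddle moves along the bottom row.
enum Action { LEFT = 0, RIGHT = 1, STAY = 2 };

// Greedy stand-in policy: move the paddle toward the ball's column.
// In the real sample, dqnAgent would pick the action from the screen state.
Action pickAction(int ballX, int paddleX)
{
    if (ballX < paddleX) return LEFT;
    if (ballX > paddleX) return RIGHT;
    return STAY;
}

// Run one episode; returns true if the paddle catches the ball.
// ballY is the number of rows the ball has left to fall.
bool runEpisode(int ballX, int ballY, int paddleX, int width)
{
    while (ballY > 0)
    {
        const Action a = pickAction(ballX, paddleX);   // agent chooses
        if (a == LEFT  && paddleX > 0)         paddleX--;
        if (a == RIGHT && paddleX < width - 1) paddleX++;
        ballY--;                                       // ball falls one row
    }
    // End of episode: a catch would earn a positive reward, a miss a negative
    // one, which is what the agent would receive via its reward callback.
    return ballX == paddleX;
}
```

With a DQN in place of `pickAction`, the agent receives the screen as its state each step and the end-of-episode reward as its training signal.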
Project Environment
To get started with the project environment, run the following:
$ cd RoboND-DeepRL-Project/build/aarch64/bin
$ chmod +x gazebo-arm.sh
$ ./gazebo-arm.sh
The plugins which hook the learning into the simulation are located in the gazebo/ directory of the repo. The RL agent and the reward functions are to be defined in ArmPlugin.cpp.
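Besides the win/loss reward at episode end, the arm usually needs an interim reward each frame to guide it toward the object. One common shaping scheme is a smoothed moving average of the change in distance to the goal; the sketch below is an illustration of that idea, with `ALPHA` and the variable names chosen for this example rather than taken from the provided `ArmPlugin.cpp` skeleton.

```cpp
#include <cassert>

static float avgGoalDelta = 0.0f;   // running average of the distance change
static const float ALPHA  = 0.4f;   // smoothing factor in [0,1] (assumed value)

// Interim reward issued each frame: positive when the gripper moved toward
// the object since the last frame, low-pass filtered so single-frame jitter
// does not dominate the signal.
float interimReward(float lastGoalDistance, float goalDistance)
{
    const float distDelta = lastGoalDistance - goalDistance; // > 0 when closing in
    avgGoalDelta = (avgGoalDelta * ALPHA) + (distDelta * (1.0f - ALPHA));
    return avgGoalDelta;
}
```

In the plugin's update callback, this reward would be handed to the agent every frame, while a collision with the object (or with the ground) would instead end the episode with the terminal win or loss reward.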