In a GPS-denied environment, one possible approach to navigating an unmanned ground vehicle (UGV) is real-time visual odometry. To navigate in such an environment, the UGV must be able to detect, identify, and relate the static and dynamic objects, such as trucks, motorbikes, and pedestrians, in its on-board camera's field of view. Object recognition is therefore crucial to navigating UGVs; however, it remains one of the open challenges in the field of computer vision. Current video-analytics software relies on simple heuristics, such as size, shape, and direction, to determine whether a detected object is a human, a vehicle, or an animal, and these heuristics are often inadequate. This thesis explores an alternative, the deep-learning technique, which makes use of neural networks trained on vast collections of images. The thesis follows a systems engineering approach in analyzing the need and proposing a solution. It shows how to create and train such networks using just three objects: a chair, a table, and a car. A Pioneer UGV equipped with the corresponding sensors is then used to test the developed algorithms. The preliminary analysis conducted in this thesis shows good potential for using the deep-learning technique on future UGVs.