In emergency response, gathering intelligence remains a largely manual process despite advances in mobile computing and multi-touch interaction. Because the process is so labor-intensive, the information digested by personnel going into the field is typically an operational period old. At a time when satellite imagery and mobile connectivity are becoming ubiquitous in our digital lives, it is alarming that this is the state of the practice in most disciplines of emergency response. Recent advances in robotics, mobile communication, and multi-touch tabletop displays are bridging this technological gap, enabling network-centric operation and increased mission effectiveness. Our work focuses on closing the gap between personnel in the field and the command hierarchy supporting those teams. Our research in human-computer interaction leverages these technologies for robot control through a collaborative multi-touch tabletop display. We have created a single-robot operator control unit and a multi-robot command and control interface. Users command individual robots or groups of robots through a gesture set designed to maximize ease of learning. Users can pan and zoom on any area, and the interface can integrate video feeds from individual robots so that users can see the scene from a robot's perspective. Manual robot control is achieved through the DREAM (Dynamically Resizing, Ergonomic, and Multi-touch) Controller. The controller is painted on the screen beneath the user's hands, resizing and reorienting itself according to our newly designed algorithm for fast hand detection, finger registration, and handedness registration. Beyond robot control, the DREAM Controller and its hand detection algorithms have a wide range of applications in general human-computer interaction, such as keyboard emulation and multi-touch user interface design.
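To give a flavor of what finger and handedness registration from raw touch points involves, the sketch below uses a simple geometric heuristic of our own construction, not the paper's algorithm: given five contacts from one hand, treat the contact farthest from the centroid as the thumb, and infer handedness from which side of the thumb the nearest remaining finger lies. The function name, coordinate convention (y increasing upward), and sign convention are all illustrative assumptions.

```python
import math

def classify_hand(points):
    """Guess the thumb contact and handedness from five touch points.

    points: five (x, y) tuples from a single hand, y increasing upward.
    Returns (thumb_point, 'right' or 'left').
    A rough illustrative heuristic, not the DREAM Controller's algorithm.
    """
    # Centroid of all five contacts.
    cx = sum(x for x, _ in points) / 5.0
    cy = sum(y for _, y in points) / 5.0
    # Thumb heuristic: the contact farthest from the centroid, since the
    # thumb sits apart from the arc formed by the four fingertips.
    thumb = max(points, key=lambda p: math.hypot(p[0] - cx, p[1] - cy))
    fingers = [p for p in points if p is not thumb]
    # Index-finger heuristic: the finger contact closest to the thumb.
    index = min(fingers,
                key=lambda p: math.hypot(p[0] - thumb[0], p[1] - thumb[1]))
    u = (thumb[0] - cx, thumb[1] - cy)
    v = (index[0] - cx, index[1] - cy)
    # The sign of the 2-D cross product says on which side of the thumb
    # the index finger lies: negative -> right hand, positive -> left hand
    # (under the assumed y-up convention).
    cross = u[0] * v[1] - u[1] * v[0]
    return thumb, ('right' if cross < 0 else 'left')
```

A real implementation would also have to reject ambiguous contact sets and track contacts over time, but the same centroid-and-angle reasoning lets the on-screen controller place itself under the correct hand at the correct orientation.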