We present a real-time system featuring a 3D character that can converse with users and capture, analyze, and interpret subtle, multidimensional human nonverbal behaviors, with possible applications in job interviews, public speaking, and even automated speech therapy. The system runs on a personal computer and senses nonverbal data from video (i.e., facial expressions) and audio (i.e., speech recognition and prosody analysis) using a standard webcam. We contextualized the development and evaluation of our system as a training scenario for job interviews. Through iterative, user-centered design, we determined how the nonverbal data could be presented to the user in an intuitive and educational manner. We tested the efficacy of the system in the context of job interviews with 90 MIT undergraduate students. Our results suggest that participants who used our system to improve their interview skills were perceived to be better candidates by human judges. Participants reported that the most useful feature was feedback on their speaking rate, and they strongly agreed that they would consider using the system again for self-reflection.