Gait analysis is an important tool for monitoring and preventing injuries, as well as for quantifying functional decline in neurological disease and in the elderly. In many cases, it is more meaningful to monitor patients in their natural living environments with low-cost equipment such as cameras and wearable sensors. Inertial sensors alone, however, cannot provide sufficient detail on angular dynamics such as joint angles. This paper presents a method that tracks 2D joint coordinates from a single RGB camera using state-of-the-art vision algorithms and reconstructs the 3D joint trajectories via a sparse representation over an active shape model. We then extract gait features and validate them against a state-of-the-art commercial multi-camera motion-tracking system. Our results are comparable to those reported in the current literature for gait characterization based on depth cameras and optical markers.
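The 2D-to-3D lifting step mentioned above can be viewed as sparse coding over an active-shape basis: the 3D pose is modeled as a mean shape plus a sparse combination of learned basis shapes, and the coefficients are chosen so that the projected model matches the observed 2D joints. The sketch below is illustrative only, not the paper's implementation; it assumes an orthographic camera with known orientation and uses ISTA (iterative soft thresholding) for the sparse solve. All variable names (`W`, `mean_shape`, `basis`) are hypothetical.

```python
import numpy as np

def reconstruct_3d(W, mean_shape, basis, lam=1e-3, n_iter=2000):
    """Sketch of sparse active-shape 3D reconstruction (assumed setup).

    Recovers sparse coefficients c such that the orthographic projection
    of  mean_shape + sum_k c[k] * basis[k]  matches the observed 2D
    joints W.  Shapes: W is (J, 2), mean_shape is (J, 3), basis is
    (K, J, 3).  A real pipeline would also estimate the camera.
    """
    K = basis.shape[0]
    # Orthographic projection keeps the x,y coordinates, so the fit is
    # the linear system A c ~= b with an L1 penalty on c.
    A = basis[:, :, :2].reshape(K, -1).T           # (2J, K)
    b = (W - mean_shape[:, :2]).ravel()            # (2J,)
    c = np.zeros(K)
    step = 1.0 / np.linalg.norm(A, 2) ** 2         # ISTA step size
    for _ in range(n_iter):
        grad = A.T @ (A @ c - b)                   # gradient of 0.5*||Ac-b||^2
        c -= step * grad
        # Soft thresholding enforces sparsity of the coefficients.
        c = np.sign(c) * np.maximum(np.abs(c) - step * lam, 0.0)
    # Assemble the full 3D shape, including the unobserved depth axis.
    return mean_shape + np.tensordot(c, basis, axes=1)  # (J, 3)
```

Because the basis constrains the solution to plausible body shapes, depth is recovered even though only 2D joint positions are observed; the sparsity penalty selects a few deformation modes and suppresses the rest.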