
Vision-driven Robotic Object Manipulation
With deep learning and reinforcement learning, the robot learns manipulation skills on its own. Performing multiple object manipulation tasks (e.g., pick-and-place, assembly) with a vision sensor reduces hardware and integration costs. Training data is collected in simulation, and the learned policy transfers to the real world automatically, which lowers deployment cost; a minimal sketch of such a training loop follows the videos below. The flexibility of this technique lets the robot handle customized products in intelligent manufacturing factories and sort potentially many product categories in logistics.
[Video 1: grasping multiple objects]
[Video 2: pushing an object]
[Video 3: flipping an object]
[Video 4: manipulating objects in VR environment]
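As a toy illustration of the sim-to-real idea above (not the project's actual code), the sketch below trains a one-parameter policy entirely in a randomized simulator so that it does not overfit to any single simulated world. SimEnv, the reward, and the finite-difference update are all illustrative assumptions.

import random

class SimEnv:
    """Toy stand-in for the simulator; every name here is illustrative."""

    def reset(self):
        # Domain randomization: vary dynamics and appearance each episode
        # so the policy cannot overfit to a single simulated world.
        self.friction = random.uniform(0.4, 1.0)
        self.light = random.uniform(0.5, 1.5)
        self.obj_x = random.uniform(-0.2, 0.2)  # object offset from gripper
        return self.observe()

    def observe(self):
        # Stand-in for the rendered camera image a real policy would see.
        return self.obj_x * self.light

    def step(self, action):
        # Moving the gripper toward the object earns a less negative reward.
        self.obj_x -= action * self.friction * 0.1
        return self.observe(), -abs(self.obj_x), abs(self.obj_x) < 0.01

def episode_return(env, w):
    obs, total = env.reset(), 0.0
    for _ in range(50):
        obs, reward, done = env.step(w * obs)  # linear one-parameter policy
        total += reward
        if done:
            break
    return total

def train(iterations=500, lr=0.1, sigma=0.1):
    env, w = SimEnv(), 0.0
    for _ in range(iterations):
        eps = random.gauss(0.0, sigma)
        # Antithetic finite-difference gradient estimate; reusing one seed
        # for both evaluations pairs their randomization (common random
        # numbers), which greatly reduces the variance of the estimate.
        seed = random.randrange(1 << 30)
        random.seed(seed)
        r_plus = episode_return(env, w + eps)
        random.seed(seed)
        r_minus = episode_return(env, w - eps)
        w += lr * (r_plus - r_minus) * eps / (2 * sigma ** 2)
    return w

if __name__ == "__main__":
    print("policy parameter learned in simulation:", train())

In a real deployment the scalar weight would be a convolutional network over camera images, but the structure of the loop (randomize, roll out, update) is the same.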

Autonomous Cars Guided by Four-Way Infrared Sensors
This demo shows a navigation system for autonomous cars based on guidance signals from four infrared sensors mounted on the underside of the car. The car's orientation is estimated from the four-way signals, and the estimate drives the adjustment of the motor speeds.
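As a rough sketch of this scheme (not the demo's actual firmware), the code below estimates a signed heading error as the weighted centroid of the four downward-facing IR readings and converts it into differential wheel speeds with a proportional controller. The sensor offsets, the [0, 1] reading scale, and the gain kp are assumptions.

# Sensor positions across the underside, left to right (arbitrary units).
SENSOR_OFFSETS = (-3.0, -1.0, 1.0, 3.0)

def estimate_error(ir_values):
    """Weighted centroid of four IR readings in [0, 1] (higher = more line
    under that sensor). Returns a signed error: negative = line to the
    left, or None when no sensor sees the line."""
    total = sum(ir_values)
    if total == 0:
        return None
    return sum(o * v for o, v in zip(SENSOR_OFFSETS, ir_values)) / total

def motor_speeds(error, base=0.5, kp=0.15):
    """Proportional steering: slow one wheel to turn back toward the line."""
    left = max(0.0, min(1.0, base + kp * error))
    right = max(0.0, min(1.0, base - kp * error))
    return left, right

if __name__ == "__main__":
    # Fake reading: the line sits mostly under the third sensor (right of
    # center), so the controller speeds up the left wheel to steer right.
    error = estimate_error((0.0, 0.1, 0.8, 0.1))
    print("error:", error, "-> left/right speeds:", motor_speeds(error))

A derivative term can be added on top of the proportional one if the car oscillates around the guide line.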

Collision Avoidance and Dynamic Path Planning
This demo shows collision avoidance between two cars. Obstacle distance is measured by infrared and ultrasonic sensors and fed into dynamic path planning. The demo also illustrates our techniques for combining different sensors.
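The sketch below illustrates one simple way to combine the two range sensors for this behavior (an assumption for illustration, not the demo's exact logic): take the more pessimistic of the two readings and pick a motion primitive from the remaining clearance.

STOP_CM, AVOID_CM = 15.0, 40.0  # assumed clearance thresholds

def fused_distance(ir_cm, us_cm):
    """Conservative fusion: trust whichever sensor reports the nearer
    obstacle. IR is reliable at short range and ultrasonic at longer
    range, so the minimum is a safe lower bound on clearance."""
    return min(ir_cm, us_cm)

def plan_step(distance_cm):
    """Tiny dynamic planner: choose a motion primitive from the clearance."""
    if distance_cm < STOP_CM:
        return "stop"
    if distance_cm < AVOID_CM:
        return "detour"  # swing around the obstacle, then rejoin the path
    return "forward"

if __name__ == "__main__":
    print(plan_step(fused_distance(ir_cm=55.0, us_cm=30.0)))  # -> detour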

Adaptive Temporal Encoding Network for Video Instance-level Human Parsing
This demo of “Adaptive Temporal Encoding Network for Video Instance-level Human Parsing” shows results on the test set of the VIP dataset (http://www.sysu-hcp.net/lip/video_parsing.php). Video instance-level human parsing is the task of not only segmenting the various body parts and clothes in every frame of a video, but also associating each part with a person instance; it is challenging but has broad application prospects.
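For readers unfamiliar with the task, the sketch below shows one plausible per-frame output representation (an assumption for illustration; not the paper's code or the VIP dataset's exact format): a part label map plus an instance map, from which per-person part masks can be extracted.

from dataclasses import dataclass
from typing import List

import numpy as np

@dataclass
class FrameParsing:
    part_map: np.ndarray      # (H, W) ints: one part/clothes label per pixel
    instance_map: np.ndarray  # (H, W) ints: which person each pixel belongs to

def instance_part_masks(frame: FrameParsing, instance_id: int) -> List[np.ndarray]:
    """Binary masks, one per part label, restricted to a single person.
    Label 0 is assumed to be background."""
    person = frame.instance_map == instance_id
    labels = np.unique(frame.part_map[person])
    return [(frame.part_map == lab) & person for lab in labels if lab != 0]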