Robot Operating System (ROS) is a collection of software frameworks for robot software development, providing operating system-like functionality on a heterogeneous computer cluster.
AlienGo is a robot dog developed by Unitree that weighs 19 kg, one size larger than its 12 kg sibling, the A1 quadruped walking robot. It is a medium-sized machine, somewhat smaller than large robots of around 30 kg such as the Boston Dynamics Spot.
Equipped with advanced manoeuvrability and expandable machine-vision functions, it can walk across many kinds of rough terrain: factories and plants, tunnels and piping facilities, construction sites, agricultural land, and forests. It is a robot dog expected to be used for patrol monitoring and visual inspection on site, and it is not only an industrial robot but also a strong platform for research and entertainment.
AlienGo is a quadruped robot suited to reconnaissance and inspection tasks, especially on rough terrain, thanks to its excellent balancing capability. This also lets it perform complex manoeuvres such as somersaults.
Its main features are:
Super-long battery life: the maximum operating time can reach 4.5 hours.
Compound joint control based on force-control technology: it realizes full control of 3-axis posture and position, giving strong multi-terrain adaptability and stable running on rugged gravel roads.
Special movements such as fast running, backflips and jumping.
A maximum walking speed exceeding 1.5 m/s.
A fuselage with good impact resistance.
Open source.
Control interfaces supporting C/C++, ROS, etc.
User-friendly interface.
Depth-perception vision.
Depth camera: a minimum sensing depth of about 0.11 m, with up to 1280 × 720 depth resolution.
People recognition and facial recognition.
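Depth values closer than the camera's minimum sensing range stated above (about 0.11 m) are unreliable and should be discarded before use. The following is a minimal, camera-independent sketch of that filtering step; the frame here is a plain nested list standing in for a real depth image, not output from the AlienGo SDK.

```python
# Mask depth readings below the camera's minimum sensing range.
# MIN_DEPTH_M comes from the spec above; the 2x3 frame is an illustrative
# stand-in for a real 1280x720 depth image.

MIN_DEPTH_M = 0.11  # minimum sensing depth in metres, from the spec

def mask_invalid_depth(frame, min_depth=MIN_DEPTH_M):
    """Replace depth values below the sensing floor with None (invalid)."""
    return [[d if d is not None and d >= min_depth else None for d in row]
            for row in frame]

# Tiny stand-in frame (metres): values under 0.11 m are unreliable.
frame = [[0.05, 0.50, 1.20],
         [0.11, 0.02, 0.75]]
print(mask_invalid_depth(frame))
# [[None, 0.5, 1.2], [0.11, None, 0.75]]
```

A real pipeline would apply the same mask per-pixel on the camera's depth array before feeding it to perception functions.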
AlienGo Basic is equipped with:
2 depth cameras and 1 visual-odometry camera;
1 Mini-PC and 1 NVIDIA TX2 unit;
1 AlienGo battery.
AlienGo Pro is equipped with:
2 depth cameras and 1 visual-odometry camera;
1 Mini-PC and 1 NVIDIA TX2 unit;
1 AlienGo battery;
2 Lidars.
Hardware interfaces (both models):
HDMI × 2;
Ethernet port × 2;
USB × 3;
RS485;
external output voltages of 5 V, 12 V, 19 V and 24 V, convenient for users adding external load equipment.
Both models support:
An M5 rail groove on the robot's back for easy installation of related parts;
High-level and low-level API development; speed, position and torque commands can be sent to each motor separately;
Human following, body recognition, depth perception, real-time HD video transmission, VSLAM, gesture recognition, and machine learning;
Walking, trotting, standing up after falling, rolling over, jumping, etc.
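A high-level API typically accepts body-frame velocity commands, while the low-level API addresses individual motors. The actual Unitree message layout is not documented here, so the sketch below is purely hypothetical: the field names (`mode`, `vx`, `vy`, `yaw_rate`) and the byte layout are invented to illustrate how such a command could be serialized for transport.

```python
import struct

# Hypothetical high-level command: the real Unitree SDK defines its own
# message structs; this only illustrates the serialize/deserialize idea.
# Invented layout (little-endian): uint8 mode, three float32 velocities.
CMD_FORMAT = "<Bfff"

def pack_cmd(mode, vx, vy, yaw_rate):
    """Serialize a walk command: mode flag plus body-frame velocities (m/s, rad/s)."""
    return struct.pack(CMD_FORMAT, mode, vx, vy, yaw_rate)

def unpack_cmd(payload):
    """Inverse of pack_cmd; returns (mode, vx, vy, yaw_rate)."""
    return struct.unpack(CMD_FORMAT, payload)

# Example: trot forward at 0.5 m/s while turning slightly.
payload = pack_cmd(mode=2, vx=0.5, vy=0.0, yaw_rate=0.1)
mode, vx, vy, yaw = unpack_cmd(payload)
print(mode, vx)  # 2 0.5
```

In practice the payload would be sent over the robot's network interface at a fixed control rate; the round-trip pack/unpack above is just a local sanity check.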
With its 2 Lidars, AlienGo Pro supports 4 perception functions:
dynamic obstacle avoidance, navigation planning, map building, and self-positioning. Unitree takes responsibility for installing the lidars with their protection brackets on the robot.
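Map building from lidar generally means converting range returns into an occupancy grid. The sketch below shows that conversion in its simplest 2D form; the cell size and the scan values are illustrative assumptions, not AlienGo parameters.

```python
import math

# Minimal 2D occupancy-grid sketch: mark the grid cells that contain a
# lidar return. Cell size is an illustrative assumption.
CELL_SIZE = 0.25  # metres per grid cell (assumption)

def scan_to_points(ranges, angle_min, angle_step):
    """Convert a polar lidar scan (range per beam) to Cartesian (x, y) points."""
    return [(r * math.cos(angle_min + i * angle_step),
             r * math.sin(angle_min + i * angle_step))
            for i, r in enumerate(ranges)]

def mark_occupied(points, cell=CELL_SIZE):
    """Return the set of occupied (col, row) cells for the given points."""
    return {(int(x // cell), int(y // cell)) for x, y in points}

# Two beams, both returning 1.0 m: one straight ahead, one at 90 degrees.
points = scan_to_points([1.0, 1.0], angle_min=0.0, angle_step=math.pi / 2)
print(sorted(mark_occupied(points)))  # [(0, 4), (4, 0)]
```

A real mapper would also clear the free cells along each beam and fuse scans over time; this shows only the hit-marking step that obstacle avoidance and planning build on.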
Field Applications
R&D
Load transport
Inspection
Security
Entertainment
HUMAN POSTURE RECOGNITION, TRACKING AND FACE RECOGNITION
1. BODY POSTURE RECOGNITION
The colour camera can identify a person's specific posture using a deep-learning model and use it for human-machine interaction: the robot can make corresponding movements in response to different body postures.
2. HUMAN SKELETON PERCEPTION
The robot can compute the two-dimensional skeleton of a human body from the colour image in its view, and then use the depth information to compute the three-dimensional skeleton and motion information of a specific person.
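Lifting a 2D keypoint to 3D once its depth is known uses the standard pinhole camera model: X = (u − cx)·Z / fx and Y = (v − cy)·Z / fy. The sketch below applies it with assumed example intrinsics, not the AlienGo camera's actual calibration.

```python
# Back-project a 2D skeleton keypoint to 3D with the pinhole camera model:
#   X = (u - cx) * Z / fx,  Y = (v - cy) * Z / fy.
# The intrinsics below are assumed example values for a 1280x720 image,
# not the robot camera's real calibration.

FX, FY = 600.0, 600.0   # focal lengths in pixels (assumption)
CX, CY = 640.0, 360.0   # principal point (assumption)

def keypoint_to_3d(u, v, depth_m):
    """Lift pixel (u, v) with depth Z (metres) to camera-frame (X, Y, Z)."""
    x = (u - CX) * depth_m / FX
    y = (v - CY) * depth_m / FY
    return (x, y, depth_m)

# A wrist detected at pixel (940, 360), 1.5 m from the camera:
print(keypoint_to_3d(940, 360, 1.5))  # (0.75, 0.0, 1.5)
```

Applying this per joint turns the 2D skeleton into the 3D skeleton described above.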
3. TARGET PERSON TRACKING
When there is more than one person in the scene, a person can tell the robot to lock onto them with a certain posture (for example, raising the left hand). Thereafter, the robot will follow the target, even while the target is moving.
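The lock-on gesture can be reduced to a simple geometric test on 2D keypoints: in image coordinates, y grows downward, so a raised left wrist has a smaller y value than the head. The keypoint dictionary layout below is invented for illustration; a real system would use the output of the skeleton-perception stage.

```python
# Pick the person to follow: the first one whose left wrist appears above
# their head in the image (y grows downward, so "above" means smaller y).
# The keypoint dictionary format is invented for this illustration.

def is_raising_left_hand(kp):
    """True if the left wrist keypoint is above the head keypoint."""
    return kp["left_wrist"][1] < kp["head"][1]

def pick_target(people):
    """Return the id of the first person raising their left hand, else None."""
    for pid, kp in people.items():
        if is_raising_left_hand(kp):
            return pid
    return None

people = {
    "A": {"head": (320, 100), "left_wrist": (300, 250)},  # hand down
    "B": {"head": (700, 120), "left_wrist": (720, 60)},   # hand raised
}
print(pick_target(people))  # B
```

Once a target id is selected, the tracker keeps following that id across frames rather than re-running the gesture test.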
4. FACE RECOGNITION AND APPEARANCE DETERMINATION (UNDER DEVELOPMENT)
From the robot's viewpoint, an artificial-intelligence algorithm automatically performs face recognition and crowd classification, and can identify gender, age and outfits.