In the prologue of the drama, the following definition of robots is given: "Robots are not people (Roboti nejsou lidé)." Humanoid robots are by far the most advanced robot systems in the group of biologically inspired robots.
Robot Manipulator
The first three degrees of freedom describe the position of the body, while the other three degrees of freedom determine its orientation. The anthropomorphic type of robotic arm (Figure 1.5) has all three joints of the rotary type and as such is most similar to a human arm.
Industrial Robotics
The second group includes robots that are slaves inside a robot cell. In this situation, the numerically controlled machine in the robot cell can take over the role of the master.
Translational Transformation
Rotational Transformation
The first three rows of the transformation matrix correspond to the x, y, and z axes of the reference frame, while the first three columns refer to the x′, y′, and z′ axes of the rotated frame. The elements of the rotation matrix are the cosines of the angles between the axes given by the corresponding column and row.
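This direction-cosine interpretation can be checked numerically; a minimal sketch, where the choice of a rotation about the z axis and the angle of 30° are illustrative, not values from the text:

```python
import numpy as np

# Rotation about the z axis by an illustrative angle phi.
phi = np.deg2rad(30.0)
R = np.array([
    [np.cos(phi), -np.sin(phi), 0.0],   # row: reference x axis
    [np.sin(phi),  np.cos(phi), 0.0],   # row: reference y axis
    [0.0,          0.0,         1.0],   # row: reference z axis
])

# Each element is the cosine of the angle between a reference axis (row)
# and a rotated axis (column), e.g. R[0, 0] = cos(angle between x and x').
assert np.isclose(R[0, 0], np.cos(phi))

# Columns of R are the unit vectors of the rotated frame expressed in the
# reference frame, so R is orthonormal: R^T R = I.
assert np.allclose(R.T @ R, np.eye(3))
```

The column view is often the most useful one in practice: reading off a column gives the rotated axis directly as a vector in the reference frame.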
Pose and Displacement
The three displacements of the reference frame result in the same final position as shown in Fig. 2.5. The displacement resulted in a new pose of the object and a new frame x′–y′–z′ shown in Fig. 2.7.
Geometrical Robot Model
Our task will be to calculate the pose of the frame x3–y3–z3 with respect to the reference frame x0–y0–z0. The pose of the fourth block can be written with respect to the first by the following matrix.
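The pose of the last frame with respect to the reference frame is obtained by multiplying the relative homogeneous transformations between consecutive frames. A minimal sketch; the relative displacements and the 90° rotation below are hypothetical, not the blocks from the text:

```python
import numpy as np

def trans(x, y, z):
    """Homogeneous translation matrix."""
    H = np.eye(4)
    H[:3, 3] = [x, y, z]
    return H

def rot_z(phi):
    """Homogeneous rotation about the z axis."""
    H = np.eye(4)
    c, s = np.cos(phi), np.sin(phi)
    H[:2, :2] = [[c, -s], [s, c]]
    return H

# Hypothetical relative poses between consecutive frames.
H01 = trans(1.0, 0.0, 0.0)
H12 = rot_z(np.pi / 2) @ trans(0.5, 0.0, 0.0)
H23 = trans(0.0, 0.2, 0.0)

# The pose of frame 3 with respect to frame 0 is the product of the
# relative transformations, applied in order.
H03 = H01 @ H12 @ H23
print(H03[:3, 3])   # position of the origin of frame 3 in frame 0
```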
Vector Parameters of a Kinematic Pair
When the robot mechanism is in the initial position, the total angle is zero, ϑi = 0, and the coordinate frames xi–yi–zi and xi−1–yi−1–zi−1 are aligned. In this case, the coordinate frames xi–yi–zi and xi−1–yi−1–zi−1 are parallel regardless of the value of the translational variable.
Vector Parameters of the Mechanism
The idea of orientation in robotics is mostly related to the orientation of the robot gripper. The matrix describing the orientation of the gripper with respect to the reference frame x0–y0–z0 has the following form.
Kinematics
The relation between the robot endpoint velocities and joint velocities is obtained by differentiation. The angle of the second segment of the two-segment manipulator is calculated as an inverse trigonometric function.
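For a planar two-segment manipulator, the second joint angle follows from the law of cosines and an inverse trigonometric function. A sketch with hypothetical segment lengths, verified against the forward kinematics:

```python
import numpy as np

def two_link_ik(x, y, l1, l2):
    """Inverse kinematics of a planar two-segment manipulator
    (elbow-down branch); angles in radians."""
    c2 = (x**2 + y**2 - l1**2 - l2**2) / (2 * l1 * l2)
    s2 = np.sqrt(1.0 - c2**2)          # +sqrt selects the elbow-down branch
    th2 = np.arctan2(s2, c2)           # inverse trigonometric function
    th1 = np.arctan2(y, x) - np.arctan2(l2 * s2, l1 + l2 * c2)
    return th1, th2

# Hypothetical segment lengths and endpoint position.
l1, l2 = 1.0, 0.8
th1, th2 = two_link_ik(1.2, 0.7, l1, l2)

# Check: the forward kinematics must reproduce the requested endpoint.
xf = l1 * np.cos(th1) + l2 * np.cos(th1 + th2)
yf = l1 * np.sin(th1) + l2 * np.sin(th1 + th2)
assert np.allclose([xf, yf], [1.2, 0.7])
```

Taking the negative square root instead gives the second (elbow-up) solution of the same position.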
Statics
In this way, we obtained an important relationship between the joint torques and the forces on the robot's end effector. It will be used in the control of a robot that is in contact with the environment.
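The static relationship between joint torques and end-effector forces, τ = Jᵀf, can be illustrated for a planar two-segment arm; the configuration, segment lengths, and external force below are hypothetical:

```python
import numpy as np

def jacobian(th1, th2, l1, l2):
    """Jacobian of the endpoint position of a planar two-segment arm."""
    s1, c1 = np.sin(th1), np.cos(th1)
    s12, c12 = np.sin(th1 + th2), np.cos(th1 + th2)
    return np.array([
        [-l1 * s1 - l2 * s12, -l2 * s12],
        [ l1 * c1 + l2 * c12,  l2 * c12],
    ])

# Hypothetical configuration and external force on the end effector.
J = jacobian(0.3, 0.6, 1.0, 0.8)
f = np.array([5.0, -2.0])     # N, expressed in the base frame

# Joint torques that balance the end-effector force.
tau = J.T @ f

# Consistency check via virtual work: for any joint velocity the power
# at the joints equals the power delivered at the end effector.
qdot = np.array([0.1, -0.2])
assert np.isclose(tau @ qdot, f @ (J @ qdot))
```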
Workspace
The largest working area of the two-segment mechanism occurs for equal lengths of both segments. Let's look at the influence of the second angle ϑ2 on the area of the working surface.
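The claim about equal segment lengths can be verified directly: with unrestricted joint ranges the reachable set of the planar two-segment arm is an annulus between radii |l1 − l2| and l1 + l2, whose area 4πl1l2 is maximized at l1 = l2 when the total length is fixed. A sketch:

```python
import numpy as np

def workspace_area(l1, l2):
    """Area reachable by a planar two-segment arm with full joint ranges:
    an annulus between radii |l1 - l2| and l1 + l2."""
    return np.pi * ((l1 + l2)**2 - (l1 - l2)**2)   # simplifies to 4*pi*l1*l2

# With the total length fixed, the area is largest for equal segments.
L = 2.0
areas = {l1: workspace_area(l1, L - l1) for l1 in (0.5, 0.8, 1.0)}
assert areas[1.0] == max(areas.values())
```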
Dynamics
A calculation of the accelerations therefore comes down to determining the forces on the two “particles”. The torque of the base is equal to the torque M1 of the actuator in the first joint.
Characteristics of Parallel Robots
The motion of the platform is limited by the constraints imposed by the joints, which reduces the number of degrees of freedom of the robot mechanism. The total number of degrees of freedom in the joints is 16 (3 rotational joints, 2 translational joints, 1 universal joint, and 3 spherical joints).
Kinematic Arrangements of Parallel Robots
There is also an independent middle leg R0U0T0U0 which has no effect on platform movement. Without considering the middle leg, the number of degrees of freedom of the mechanism is
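Degrees-of-freedom counts of parallel mechanisms like this are commonly obtained from the Kutzbach (Grübler) criterion. A sketch; the 6-UPS Stewart platform used below is an illustrative example, not the mechanism from the text:

```python
def kutzbach(n_links, joint_dofs):
    """Kutzbach (Gruebler) criterion for spatial mechanisms:
    F = 6*(n - j - 1) + sum of the joint degrees of freedom,
    where n counts all links (including the base) and j counts joints."""
    j = len(joint_dofs)
    return 6 * (n_links - j - 1) + sum(joint_dofs)

# Illustrative 6-UPS Stewart platform: base + platform + 12 leg links,
# and per leg one universal (2 dof), one prismatic (1 dof) and one
# spherical (3 dof) joint.
legs = 6 * [2, 1, 3]          # 18 joints, 36 joint degrees of freedom
F = kutzbach(14, legs)
assert F == 6                 # the platform has full spatial mobility
```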
Modelling and Design of Parallel Robots
The size of the workspace in parallel robots is limited by the ranges of displacements of the robot legs, by passive joint displacements and, in particular, by the interference between robot legs.
Principles of Sensing
However, due to the complexity of human sensing, robot sensing is limited to fewer sensors. Optical sensors use light to convert the signals; an example of such a sensor is the optical encoder.
Sensors of Movement
- Placing of Sensors
- Potentiometer
- Optical Encoder
- Magnetic Encoder
- Tachometer
- Inertial Measurement Unit
Let us assume that point B represents the reference position of the potentiometer belonging to the link. The angle of the wiper relative to the reference position B is indicated by ϑ (in radians).
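A sketch of how a potentiometer-based joint sensor converts the measured wiper voltage into the angle ϑ; the supply voltage and usable range below are hypothetical values, not from the text:

```python
import math

# Hypothetical potentiometer parameters.
U_SUPPLY = 5.0                                  # V across the resistive track
THETA_MAX = 2 * math.pi * 350 / 360             # usable range, e.g. 350 deg

def wiper_angle(u_wiper):
    """Angle of the wiper (rad) from reference position B, assuming the
    wiper voltage is proportional to the travelled angle."""
    return (u_wiper / U_SUPPLY) * THETA_MAX

# Half of the supply voltage corresponds to half of the range.
theta = wiper_angle(2.5)
assert abs(theta - THETA_MAX / 2) < 1e-12
```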
Contact Sensors
- Tactile Sensor
- Limit Switch and Bumper
- Force and Torque Sensor
- Joint Torque Sensor
In a more advanced case, we use force sensors to control the force between the robot's end effector and the environment. During contact of the robot with the environment, the beams are deformed by the external forces, which changes the resistance of the strain gauges.
Proximity and Ranging Sensors
Ultrasonic Rangefinder
Laser Rangefinder and Laser Scanner
System Configuration
Forward Projection
We will therefore find the geometric relationship between the coordinates of the point P = (xc, yc, zc) in space and the coordinates of the point p = (u, v) in the image. In this way we obtained the relationship between the coordinates (xc, yc, zc) of the point P in the camera frame and the coordinates (xi, yi) of the point p in the image plane.
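The perspective relationship xi = f·xc/zc, yi = f·yc/zc can be sketched directly; the focal length and point coordinates below are hypothetical:

```python
import numpy as np

def project(point_c, f):
    """Pinhole (perspective) projection of a point expressed in the camera
    frame onto the image plane: x_i = f*x_c/z_c, y_i = f*y_c/z_c."""
    xc, yc, zc = point_c
    return np.array([f * xc / zc, f * yc / zc])

# Hypothetical point 2 m in front of a camera with focal length 0.01 m.
p = project((0.4, -0.2, 2.0), f=0.01)
# image coordinates approximately (0.002, -0.001) m
```

Mapping metric image coordinates to pixel coordinates (u, v) additionally requires the pixel size and the principal point, i.e. the internal camera parameters.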
Backward Projection
Single Camera
It is used in the calibration process to determine both internal and external camera parameters. These equations can be obtained from the similar triangles represented by points A, B and C.
Stereo Vision
Figure 8.7a shows a top view of both cameras, while Fig. 8.7b shows a side view of the situation in Fig. 8.6.
Image Processing
Object Pose from Image
Camera Calibration
The pose of the calibration pattern iHcp expressed in the image coordinate frame xi–yi–zi is the result of image processing. The pose of the calibration pattern Hcp expressed in the robot coordinate frame x–y–z can be determined with the calibration point at the robot end effector and the calibration points marked on the calibration pattern.
Object Pose
In the simplest task we only define the start and end point of the end effector of the robot. The inverse kinematic model is then used to calculate the joint variables corresponding to the desired end effector position of the robot.
Interpolation of the Trajectory Between Two Points
The velocity at the end of the initial parabolic phase (9.3) must be equal to the constant velocity in the linear phase (9.4).
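This matching condition is what makes the trapezoidal velocity profile continuous. A sketch of such an interpolation between two points; the start and end values, blend time and total time are hypothetical:

```python
def trapezoid(q0, qf, t, tc, T):
    """Trapezoidal velocity profile between two points: an initial parabolic
    blend of duration tc, a linear phase, and a symmetric final blend;
    total duration T (requires tc <= T/2)."""
    v = (qf - q0) / (T - tc)          # constant velocity of the linear phase
    a = v / tc                        # acceleration of the parabolic phases
    if t < tc:                        # initial parabolic phase
        return q0 + 0.5 * a * t**2
    elif t <= T - tc:                 # linear phase: velocity a*tc equals v
        return q0 + 0.5 * a * tc**2 + v * (t - tc)
    else:                             # final parabolic phase
        return qf - 0.5 * a * (T - t)**2

# Hypothetical motion from 0 to 1 in 1 s with 0.2 s blends.
q = [trapezoid(0.0, 1.0, t, tc=0.2, T=1.0) for t in (0.0, 0.5, 1.0)]
```

By symmetry the midpoint of the motion is reached at half the total time, which the sampled values confirm.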
Interpolation by Use of via Points
The velocity at the end of the time interval t1 must be the same as the velocity at the beginning of the next interval. The vector describing the actual pose of the robot end effector generally comprises six variables.
Control of the Robot in Internal Coordinates
PD Control of Position
The calculated control input drives the robot in the direction that reduces the position error. With the right choice of the Kd gains, critical damping of the robot system is obtained.
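A joint-space PD law of this kind, τ = Kp·e − Kd·q̇ (proportional action on the position error, damping on the joint velocities), can be sketched as follows; the gains and joint values below are hypothetical:

```python
import numpy as np

def pd_control(q_d, q, qdot, Kp, Kd):
    """PD position controller in joint space: the control torque acts in
    the direction that reduces the position error, while the velocity
    term damps the motion."""
    e = q_d - q                       # position error
    return Kp @ e - Kd @ qdot         # proportional action + damping

# Hypothetical two-joint example with diagonal gain matrices.
Kp = np.diag([50.0, 30.0])
Kd = np.diag([5.0, 3.0])
tau = pd_control(q_d=np.array([0.5, -0.2]),
                 q=np.array([0.4, 0.0]),
                 qdot=np.array([0.1, 0.0]),
                 Kp=Kp, Kd=Kd)
```

Note that this basic form ignores gravity; as the following sections describe, gravity compensation or inverse dynamics is added for accurate positioning.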
PD Control of Position with Gravity Compensation
The control method shown in Fig. 10.2 provides high damping of the system in the fastest part of the trajectory, which is usually not necessary. These forces are then generated by the robot motors regardless of the position error signal.
Control of the Robot Based on Inverse Dynamics
The model for the gravitational effects ĝ(q) (the circumflex denotes the robot model), which is a good approximation of the actual gravitational forces g(q), can be implemented in the control algorithm shown in Fig. 10.4. The inertia matrix B̂(q) is an approximation of the real B(q), while n̂(q, q̇) represents an approximation of n(q, q̇), as in Eq. (10.13). The output of the control unit is determined by the following equation.
Control of the Robot in External Coordinates
- Control Based on the Transposed Jacobian Matrix
- Control Based on the Inverse Jacobian Matrix
- PD Control of Position with Gravity Compensation
- Control of the Robot Based on Inverse Dynamics
The starting point will be Eq. (10.21), which expresses the pose error of the end effector. The velocity of the robot endpoint is calculated from the joint velocities using the Jacobian matrix.
Control of the Contact Force
Linearization of a Robot System Through Inverse Dynamics
Since the forces acting on the end effector of the robot are transformed into the joint torques using the transposed Jacobian matrix (5.18), we can write the dynamic model of the robot in the following form. In Eq. (10.5) we added the component Jᵀ(q)f representing the force of interaction with the environment. The difference between Eq. (10.46) and Eq. (10.14), representing the control based on inverse dynamics in internal coordinates, is the component Jᵀ(q)f that compensates for the influence of external forces on the robot mechanism.
Force Control
The matrices Bc and Fc in Eq. (10.48) determine the movement of the object under the influence of the force f̃. From Eq. (10.48) the acceleration of the virtual object can be calculated. The control system incorporating contact force control, parallel composition and control of the robot based on inverse dynamics in external coordinates is shown in Fig. 10.16.
Robot Safety
When programming or teaching a robot, the human operator must be in the robot's work area. Level 2 includes a level of protection while an operator is in the robot's work area.
Robot Peripherals in Assembly Processes
Assembly Production Line Configurations
Feeding Devices
The reliable operation of the feeding devices is of great importance in robotic cells without robotic vision. The simplest way to bring parts to the robot cell is represented by an installation table.
Conveyors
With the belt-driven conveyor, the upper part of the belt drives pallets or other objects or material (Fig. 11.13). The advantage of the conveyor belt with rollers lies in the low collision forces that occur between the pallets or objects handled by the conveyor belt.
Robot Grippers and Tools
An intermediate grip is also possible, where the object is gripped on inner and outer surfaces (Fig. 11.21). The small nipples on the head on the right side of Fig. 11.22 prevent damage to the surface of the object.
Collaborative Industrial Robot System
Outside of the collaborative workspace, the robot can act as a traditional industrial robot without any specific limitations other than those that are task-related. In collaborative robotic operations, operators can work directly near the robotic system while the system is active, and physical contact between an operator and the robotic system can occur within the collaborative workspace.
Collaborative Robot
The robot can ignore the contact and follow the reference trajectory, or the robot can be stopped. Programming collaborative robots is simple, mostly done by manual guidance, so the use of the robot is very flexible; the robot can be up and running at a new workstation in a very short time.
Collaborative Operation
- Safety-Rated Monitored Stop
- Hand Guiding
- Speed and Separation Monitoring
- Power and Force Limiting
The robot system is ready for manual control when it enters the collaborative workspace and a safety-rated monitored stop is issued. There are two possible types of contact between the moving parts of the robot system and areas on the operator's body.
Collaborative Robot Grippers
The power and force limiting method can be used in collaborative applications where the presence of the operator is often required, in time-dependent operations (where delays due to safety stops are undesirable, but physical contact between the robot system and the operator cannot be excluded), and in applications with small parts and great variability in assembly. Future gripper design will evolve from user programming to grippers that can be automatically adjusted depending on parts and applications.
Applications of Collaborative Robotic System
The robot stops at the interface window and can then be manually moved outside the interface. The operator guides the robot by hand along a path in a task-specific work area at reduced speed.
Mobile Robot Kinematics
In the car-like problem, the orientation of the mobile robot is defined by the angle ϕ. Following the same principle as in (13.7), the translational velocity of the unicycle can be defined as
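The unicycle kinematic model, ẋ = v cos ϕ, ẏ = v sin ϕ, ϕ̇ = ω, can be integrated numerically; a sketch using simple Euler steps with hypothetical velocities:

```python
import numpy as np

def unicycle_step(x, y, phi, v, omega, dt):
    """One Euler integration step of the unicycle kinematic model:
    xdot = v*cos(phi), ydot = v*sin(phi), phidot = omega."""
    x += v * np.cos(phi) * dt
    y += v * np.sin(phi) * dt
    phi += omega * dt
    return x, y, phi

# Drive straight along the x axis for 1 s (hypothetical values):
# translational velocity v = 0.5 m/s, angular velocity omega = 0.
pose = (0.0, 0.0, 0.0)
for _ in range(100):
    pose = unicycle_step(*pose, v=0.5, omega=0.0, dt=0.01)
# the robot has moved about 0.5 m along x, orientation unchanged
```

Setting a nonzero ω makes the robot follow an arc, which is the basis of differential-drive path control.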
Navigation
Localization
Laser guidance technology uses multiple fixed reference points (reflective strips) located within the operational area that can be detected by a laser head mounted on the vehicle (Fig. 13.8c). Natural navigation is based on information from the existing environment scanned by laser scanners, using only a few fixed reference points (Fig. 13.8d).
Path Planning
Path Control
Biped Locomotion
Zero-Moment Point
Generation of Walking Patterns
Imitation Learning
Observation of Human Motion and Its Transfer
Dynamic Movement Primitives
Convergence Properties of Linear Dynamic Systems
Dynamic Movement Primitives for Point-to-Point Movements
Estimation of DMP Parameters from a Single Demonstration
Modulation of DMPs