Robotics

Nguyễn Gia Hào

Academic year: 2023


In the prologue of the drama, the following definition of robots is given: "Robots are not people (Roboti nejsou lidé)." Humanoid robots are by far the most advanced robot systems in the group of biologically inspired robots.

Fig. 1.1 Classification of robots

Robot Manipulator

The first three degrees of freedom describe the position of the body, while the other three degrees of freedom determine its orientation. The anthropomorphic type of robotic arm (Figure 1.5) has all three joints of the rotary type and as such is most similar to a human arm.

Fig. 1.3 Rotational (left) and translational (right) robot joint

Industrial Robotics

The second group includes robots that are slaves inside a robot cell. In this situation, the numerically controlled machine in the robot cell can take over the role of the master.

Fig. 1.6 SCARA robot arm

Translational Transformation

Rotational Transformation

The first three rows of the transformation matrix correspond to the x, y, and z axes of the reference frame, while the first three columns refer to the x′, y′, and z′ axes of the rotated frame. The elements of the rotation matrix are the cosines of the angles between the axes given by the corresponding column and row.
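This direction-cosine property can be checked numerically. A minimal sketch in Python for the rotation around the x axis (the function name is illustrative, not from the text):

```python
import numpy as np

def rot_x(theta):
    """Rotation matrix for a rotation by theta radians around the x axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[1.0, 0.0, 0.0],
                     [0.0,   c,  -s],
                     [0.0,   s,   c]])

# Element R[i, j] is the cosine of the angle between reference axis i
# (x, y, z) and rotated axis j (x', y', z').
R = rot_x(np.pi / 2)
# After a 90-degree rotation the y' axis coincides with z, so the
# second column of R is (0, 0, 1).
```

Since the columns are unit vectors of the rotated frame, the matrix is orthogonal, i.e. R Rᵀ = I.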

Fig. 2.2 Rotation around x axis

Pose and Displacement

The three displacements of the reference frame result in the same final position as shown in Fig. 2.5. The displacement results in a new pose of the object and a new frame x′–y′–z′, shown in Fig. 2.7.

Fig. 2.5 The pose of an arbitrary frame x′–y′–z′ with respect to the reference frame x–y–z

Geometrical Robot Model

Our task will be to calculate the pose of the frame x3–y3–z3 with respect to the reference frame x0–y0–z0. The pose of the fourth block can be written with respect to the first by the following matrix.
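Composing relative poses amounts to multiplying homogeneous transformation matrices. A small sketch under assumed displacement values (the numbers are made up for illustration, not the book's example):

```python
import numpy as np

def trans(x, y, z):
    """Homogeneous transformation for a pure translation."""
    H = np.eye(4)
    H[:3, 3] = [x, y, z]
    return H

def rot_z(theta):
    """Homogeneous transformation for a pure rotation around the z axis."""
    c, s = np.cos(theta), np.sin(theta)
    H = np.eye(4)
    H[:2, :2] = [[c, -s], [s, c]]
    return H

# Pose of frame 3 with respect to frame 0 as a product of relative poses:
H_0_3 = trans(1.0, 0.0, 0.0) @ rot_z(np.pi / 2) @ trans(0.0, 2.0, 0.0)
```

The last column of the product holds the position of the origin of frame 3 expressed in frame 0.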

Fig. 2.8 Mechanical assembly

Vector Parameters of a Kinematic Pair

When the robot mechanism is in the initial position, the total angle is zero (ϑi = 0), and the coordinate frames xi–yi–zi and xi−1–yi−1–zi−1 are parallel regardless of the value of the translation variable.

Fig. 3.1 Robot mechanism with coordinate frames attached to its segments

Vector Parameters of the Mechanism

The idea of orientation in robotics is mostly related to the orientation of the robot gripper. The matrix describing the orientation of the gripper with respect to the reference frame x0–y0–z0 has the following form.

Fig. 3.4 Robot mechanism with four degrees of freedom

Kinematics

The relation between the robot endpoint velocities and joint velocities is obtained by differentiation. The angle of the second segment of the two-segment manipulator is calculated as an inverse trigonometric function.
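The inverse trigonometric calculation of the second joint angle can be sketched for the planar two-segment arm of Fig. 5.1; the function names are illustrative, and the elbow branch is fixed to the positive arccos solution:

```python
import numpy as np

def ik_two_segment(x, y, l1, l2):
    """Inverse kinematics of a planar two-segment arm with segment
    lengths l1, l2 (elbow branch fixed to the positive arccos solution)."""
    # Law of cosines gives the second joint angle as an inverse
    # trigonometric function of the endpoint position:
    c2 = (x**2 + y**2 - l1**2 - l2**2) / (2.0 * l1 * l2)
    theta2 = np.arccos(c2)
    # First joint angle: endpoint direction minus the elbow offset angle.
    theta1 = np.arctan2(y, x) - np.arctan2(l2 * np.sin(theta2),
                                           l1 + l2 * np.cos(theta2))
    return theta1, theta2

def fk_two_segment(theta1, theta2, l1, l2):
    """Forward kinematics, used here to verify the inverse solution."""
    return (l1 * np.cos(theta1) + l2 * np.cos(theta1 + theta2),
            l1 * np.sin(theta1) + l2 * np.sin(theta1 + theta2))

# Round-trip check: IK followed by FK returns the original endpoint.
t1, t2 = ik_two_segment(1.0, 1.0, 1.0, 1.0)
xe, ye = fk_two_segment(t1, t2, 1.0, 1.0)
```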

Fig. 5.1 Planar two-segment robot manipulator

Statics

In this way, we obtained an important relationship between the joint torques and the forces on the robot's end effector. It will be used in the control of a robot that is in contact with the environment.

Workspace

The largest working area of the two-segment mechanism occurs for equal lengths of both segments. Let's look at the influence of the second angle ϑ2 on the area of the working surface.
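The claim about equal segment lengths can be checked with a short sketch, assuming unrestricted joint ranges so that the workspace is an annulus between radii |l1 − l2| and l1 + l2:

```python
import math

def workspace_area(l1, l2):
    """Area of the annular workspace of a planar two-segment arm with
    unrestricted joints: pi*((l1 + l2)**2 - (l1 - l2)**2) = 4*pi*l1*l2."""
    return math.pi * ((l1 + l2) ** 2 - (l1 - l2) ** 2)

# With the total length l1 + l2 fixed, equal segments maximize the area:
total = 2.0
areas = {l1: workspace_area(l1, total - l1) for l1 in (0.5, 1.0, 1.5)}
```

Since the area equals 4π l1 l2, for a fixed sum l1 + l2 the product l1 l2 (and hence the area) is largest when l1 = l2.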

Fig. 5.6 Two-segment robot manipulator

Dynamics

A calculation of the accelerations therefore comes down to determining the forces on the two “particles”. The torque of the base is equal to the torque M1 of the actuator in the first joint.

Fig. 5.11 Parameters of the planar, two-segment robot manipulator, which moves in the vertical x–y plane

Characteristics of Parallel Robots

The motion is limited by the constraints imposed by the joints, which reduces the number of degrees of freedom of the robot mechanism. The total number of degrees of freedom in the joints is 16 (3 rotational joints, 2 translational joints, 1 universal joint, and 3 spherical joints).
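The joint count above can be reproduced, and the standard degree-of-freedom formula (Grübler's count for spatial mechanisms) sketched, as follows; the serial-arm example at the end is a well-known sanity check, not the mechanism from the text:

```python
def grubler_dof(n_links, joint_dofs):
    """Gruebler count for a spatial mechanism: each of the j joints
    removes (6 - f_i) freedoms from the 6*(n_links - 1) freedoms of the
    moving bodies (n_links includes the fixed base)."""
    j = len(joint_dofs)
    return 6 * (n_links - 1 - j) + sum(joint_dofs)

# Joint freedoms listed in the text: 3 rotational and 2 translational
# joints (1 each), 1 universal joint (2), 3 spherical joints (3 each):
joint_dofs = [1, 1, 1, 1, 1, 2, 3, 3, 3]
total_joint_dof = sum(joint_dofs)   # 16, as stated in the text
```

As a sanity check, a serial arm with six revolute joints (seven links counting the base) has 6 · (7 − 1 − 6) + 6 = 6 degrees of freedom.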

Fig. 6.1 Serial kinematic chain (left) and closed kinematic chain (right)

Kinematic Arrangements of Parallel Robots

There is also an independent middle leg R0U0T0U0 which has no effect on platform movement. Without considering the middle leg, the number of degrees of freedom of the mechanism is

Fig. 6.4 The Stewart-Gough platform

Modelling and Design of Parallel Robots

The size of the workspace of parallel robots is limited by the ranges of displacements of the robot legs, by passive joint displacements and, in particular, by the interference between the robot legs.

Fig. 6.8 Kinematic parameters of a parallel robot

Principles of Sensing

However, due to the complexity of human sensing, robot sensing is limited to fewer sensors. Optical sensors use light in the signal conversion; an example of such a sensor is the optical encoder.

Sensors of Movement

  • Placing of Sensors
  • Potentiometer
  • Optical Encoder
  • Magnetic Encoder
  • Tachometer
  • Inertial Measurement Unit

Let us assume that point B represents the reference position of the potentiometer belonging to the link. The angle of the wiper relative to the reference position B is indicated by ϑ (in radians).
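In practice the wiper angle is recovered from the potentiometer's output voltage. A minimal sketch, assuming the output grows linearly with the rotation from reference position B (the function and variable names are illustrative):

```python
def wiper_angle(u_out, u_supply, theta_range):
    """Wiper angle (radians) of a rotary potentiometer, assuming the
    output voltage u_out grows linearly from 0 to u_supply over the
    full mechanical range theta_range."""
    return theta_range * u_out / u_supply
```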

Fig. 7.2 Mounting of the sensor of movement behind the reducer

Contact Sensors

  • Tactile Sensor
  • Limit Switch and Bumper
  • Force and Torque Sensor
  • Joint Torque Sensor

In a more advanced case, we use force sensors to control the force between the robot's end effector and the environment. During contact of the robot with the environment, the beams are deformed by the external forces, which changes the resistance of the strain gauges.

Fig. 7.9 Tactile sensor used in robot finger (left) and as robot skin (right)

Proximity and Ranging Sensors

Ultrasonic Rangefinder

Laser Rangefinder and Laser Scanner

System Configuration

Forward Projection

We will therefore find the geometric relationship between the coordinates of the point P = (xc, yc, zc) in space and the coordinates of the point p = (u, v) in the image. In this way we obtained the relationship between the coordinates (xc, yc, zc) of the point P in the camera frame and the coordinates (xi, yi) of the point p in the image plane.
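The relationship is the standard perspective (pinhole) projection; a minimal sketch, with the focal length f as a parameter:

```python
def project(point_c, f):
    """Perspective projection of a camera-frame point P = (xc, yc, zc)
    onto the image plane of a pinhole camera with focal length f."""
    xc, yc, zc = point_c
    # Similar triangles: xi / f = xc / zc and yi / f = yc / zc.
    return f * xc / zc, f * yc / zc
```

Note that depth is lost: all points on the ray through the projection centre map to the same image point.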

Fig. 8.2 Equivalent image plane

Backward Projection

Single Camera

It is used in the calibration process to determine both internal and external camera parameters. These equations can be obtained from the size of the triangle represented by points A, B and C.

Stereo Vision

Figure 8.7a shows the top view, while Fig. 8.7b shows the side view of the situation in Fig. 8.6. The top picture shows a top view of both cameras, while the bottom picture shows a side view of the cameras.
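With two parallel cameras, the lost depth can be recovered from the disparity between the two images. A sketch of the standard relation z = f·b/d for the parallel-camera arrangement of Fig. 8.6 (variable names are illustrative):

```python
def stereo_depth(u_left, u_right, f, b):
    """Depth of a point seen by two parallel cameras separated by
    baseline b: z = f * b / d, where d = u_left - u_right is the
    disparity between the two image coordinates."""
    d = u_left - u_right
    if d <= 0.0:
        raise ValueError("point must be in front of both cameras")
    return f * b / d
```

The smaller the disparity, the farther the point; depth resolution therefore degrades with distance.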

Fig. 8.6 Stereo view of point Q using two parallel cameras

Image Processing

Object Pose from Image

Camera Calibration

The pose of the calibration pattern iHcp, expressed in the image coordinate frame xi–yi–zi, is the result of image processing. The pose of the calibration pattern Hcp, expressed in the robot coordinate frame x–y–z, can be determined with the calibration point at the robot end effector and the calibration points marked on the calibration pattern.

Fig. 8.9 Transformations used for camera calibration

Object Pose

In the simplest task we only define the start and end point of the end effector of the robot. The inverse kinematic model is then used to calculate the joint variables corresponding to the desired end effector position of the robot.

Fig. 8.10 Transformations used for object pose computation

Interpolation of the Trajectory Between Two Points

The velocity at the end of the initial parabolic phase (9.3) must be equal to the constant velocity in the linear phase (9.4).
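This continuity condition, together with the requirement that the profile cover the total displacement qf − q0 in time tf, fixes the acceleration phase. A sketch under those two assumptions (the chosen numbers are illustrative):

```python
def accel_phase(q0, qf, v_c, t_f):
    """Acceleration time t_c and acceleration a of a trapezoidal velocity
    profile: the parabolic phase ends with velocity a * t_c = v_c, and
    the total displacement satisfies qf - q0 = v_c * (t_f - t_c)."""
    t_c = t_f - (qf - q0) / v_c
    a = v_c / t_c
    return t_c, a

t_c, a = accel_phase(0.0, 1.0, 1.25, 1.0)
```

For the profile to exist, v_c must lie between (qf − q0)/tf and 2(qf − q0)/tf.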

Fig. 9.1 The time dependence of the joint variables with trapezoidal velocity profile

Interpolation by Use of via Points

The speed at the end of the time interval t1 must be the same as the speed in the first linear segment. The vector describing the actual pose of the robot end effector generally comprises six variables.

Fig. 9.2 Trajectory interpolation through n via points—linear segments with parabolic transitions are used

Control of the Robot in Internal Coordinates

PD Control of Position

The calculated control input drives the robot in the direction that reduces the position error. With the right choice of Kd gains, critical damping of the robot system is obtained.
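For a single joint with effective inertia m, the closed-loop error dynamics under PD control are m·ë + Kd·ė + Kp·e = 0, and critical damping corresponds to a zero discriminant. A sketch of that gain choice (a simplification of the multi-joint case in the text):

```python
import math

def critical_damping_gain(kp, m):
    """Velocity gain Kd that critically damps the single-joint error
    dynamics m * e'' + Kd * e' + Kp * e = 0, i.e. Kd = 2 * sqrt(Kp * m)."""
    return 2.0 * math.sqrt(kp * m)
```

With this Kd, the error decays as fast as possible without overshoot.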

PD Control of Position with Gravity Compensation

The control method shown in Fig. 10.2 provides high damping of the system in the fastest part of the trajectory, which is usually not necessary. These forces are then generated by the robot motors regardless of the position error signal.

Control of the Robot Based on Inverse Dynamics

The model of the gravitational effects ĝ(q) (the circumflex denotes the robot model), which is a good approximation of the actual gravitational forces g(q), can be implemented in the control algorithm shown in Fig. 10.4. The inertia matrix B̂(q) is an approximation of the real B(q), while n̂(q, q̇) represents an approximation of n(q, q̇) (10.13). The output of the control unit is determined by the following equation.

Fig. 10.4 PD control with gravity compensation

Control of the Robot in External Coordinates

  • Control Based on the Transposed Jacobian Matrix
  • Control Based on the Inverse Jacobian Matrix
  • PD Control of Position with Gravity Compensation
  • Control of the Robot Based on Inverse Dynamics

The starting point will be Eq. (10.21), which expresses the pose error of the end effector. The velocity of the robot endpoint is calculated from the joint velocities using the Jacobian matrix.

Fig. 10.9 Control based on the transposed Jacobian matrix

Control of the Contact Force

Linearization of a Robot System Through Inverse Dynamics

Since the forces acting on the end effector of the robot are transformed into joint torques by the transposed Jacobian matrix (5.18), we can write the dynamic model of the robot in the following form. In Eq. (10.5) we added the component JT(q)f representing the force of interaction with the environment. The difference between Eq. (10.46) and Eq. (10.14), which represents control based on inverse dynamics in internal coordinates, is the component JT(q)f that compensates for the influence of external forces on the robot mechanism.
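The mapping τ = Jᵀ(q)f can be illustrated for the planar two-segment arm; the configuration and force below are made-up example values:

```python
import numpy as np

def jacobian_two_segment(theta1, theta2, l1, l2):
    """Geometric Jacobian of a planar two-segment arm (endpoint x, y)."""
    s1, c1 = np.sin(theta1), np.cos(theta1)
    s12, c12 = np.sin(theta1 + theta2), np.cos(theta1 + theta2)
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

# Joint torques balancing an external end-effector force f: tau = J^T(q) f.
J = jacobian_two_segment(0.0, np.pi / 2, 1.0, 1.0)
f = np.array([0.0, 1.0])      # 1 N acting along y at the end effector
tau = J.T @ f
```

In this configuration the force along y produces a torque only in the first joint, since the second segment is aligned with the force.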

Force Control

The matrices Bc and Fc in Eq. (10.48) determine the movement of the virtual object under the influence of the force f̃, and from Eq. (10.48) the acceleration of the virtual object can be calculated. The control system incorporating contact force control, parallel assembly and control of the robot based on inverse dynamics in external coordinates is shown in Fig. 10.16.

Fig. 10.14 Robot control based on inverse dynamics in external coordinates including the contact force

Robot Safety

When programming or teaching a robot, the human operator must be in the robot's work area. Level 2 provides protection while an operator is in the robot's work area.

Fig. 11.1 Level 1: mechanical robot cell protection

Robot Peripherals in Assembly Processes

Assembly Production Line Configurations

Feeding Devices

The reliable operation of the feeding devices is of great importance in robot cells without robot vision. The simplest way to bring parts into the robot cell is a fixture table.

Fig. 11.6 Simultaneous loading of a fixture table

Conveyors

With the belt-driven conveyor, the upper part of the belt drives pallets or other objects or material (Fig. 11.13). The advantage of the conveyor with rollers lies in the low collision forces that occur between the pallets or objects handled by the conveyor.

Robot Grippers and Tools

An intermediate grip is also possible, where the object is gripped on inner and outer surfaces (Fig. 11.21). The small nipples on the head on the right side of Fig. 11.22 prevent damage to the surface of the object.

Fig. 11.14 Conveyor with rollers

Collaborative Industrial Robot System

Outside of the collaborative workspace, the robot can act as a traditional industrial robot without any specific limitations other than those that are task-related. In collaborative robotic operations, operators can work directly near the robotic system while the system is active, and physical contact between an operator and the robotic system can occur within the collaborative workspace.

Fig. 12.1 Maximum workspace (limited by dotted line), restricted workspace (limited by dashed line), operating workspace (grey areas), and collaborative workspace (dark grey area)

Collaborative Robot

The robot can ignore the contact and follow the reference trajectory, or the robot can be stopped. Programming collaborative robots is simple, mostly done by manual guidance, so the use of the robot is very flexible; the robot can be up and running at a new workstation in a very short time.

Fig. 12.3 Design features of a collaborative robot

Collaborative Operation

  • Safety-Rated Monitored Stop
  • Hand Guiding
  • Speed and Separation Monitoring
  • Power and Force Limiting

The robot system is ready for manual control when it enters the collaborative workspace and issues a safety-rated monitored stop. There are two possible types of contact between the moving part of the robot system and areas on the operator's body.

Table 12.2 Robot actions for safety-rated monitored stop

Collaborative Robot Grippers

The power and force limiting method can be used in collaborative applications where the presence of the operator is often required, in time-dependent operations (where delays due to safety stops are undesirable), and in applications with small parts and great variability in assembly. Future gripper design will evolve from user programming to grippers that can be automatically adjusted depending on parts and applications.

Applications of Collaborative Robotic System

The robot stops at the interface window and can then be manually moved outside the interface. The operator guides the robot by hand along a path in a task-specific work area at reduced speed.

Fig. 12.10 Conceptual applications of collaborative robots: a hand-over window, b interface window, c collaborative workspace, d inspection, and e hand-guided robot (ISO 10218-2:2011)

Mobile Robot Kinematics

In the car-like problem, the orientation of the mobile robot is defined by the angle ϕ. Following the same principle as in (13.7), the translational velocity of the unicycle can be defined as
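For the common differential-drive realization of the unicycle model, the translational and angular velocities follow from the two wheel speeds. A sketch of these standard relations (wheel radius r and wheel separation b are assumed parameters):

```python
def unicycle_velocities(r, omega_r, omega_l, b):
    """Translational and angular velocity of a differential-drive robot
    (unicycle model) with wheel radius r, wheel angular velocities
    omega_r and omega_l, and wheel separation b."""
    v = r * (omega_r + omega_l) / 2.0
    omega = r * (omega_r - omega_l) / b
    return v, omega

v, omega = unicycle_velocities(0.1, 10.0, 10.0, 0.5)   # equal wheel speeds
```

Equal wheel speeds give pure translation; opposite wheel speeds give rotation in place.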

Fig. 13.3 Mobile robot configuration examples: a two-wheel differential drive, b differential drive with castor wheel, c three synchronously motorized and steered wheels, d three omnidirectional wheels in triangle, e four wheels with car-like steering, f t

Navigation

Localization

Laser guidance technology uses multiple fixed reference points (reflective strips) located within the operational area that can be located by a laser head mounted on the vehicle (Fig. 13.8c). Natural navigation is based on information from the existing environment scanned by laser scanners, using a few fixed reference points (Fig. 13.8d).

Fig. 13.8 Sensor abstraction disk from the suite of sensors on board the robot

Path Planning

Path Control

Biped Locomotion

Zero-Moment Point

Generation of Walking Patterns

Imitation Learning

Observation of Human Motion and Its Transfer

Dynamic Movement Primitives

Convergence Properties of Linear Dynamic Systems

Dynamic Movement Primitives for Point-to-Point Movements

Estimation of DMP Parameters from a Single Demonstration

Modulation of DMPs
