Classification of sensors:
• Proprioceptive sensors: Measure the internal state of the system (or robot). E.g. heading sensors, IMUs, encoders.
• Exteroceptive sensors: Measure the external environment, i.e. what is happening outside the system (or robot). E.g. LIDAR, sonar, GPS.
• Passive sensors: Measure energy coming from the environment, so they are strongly influenced by it. E.g. compass, gyroscopes, contact switches.
• Active sensors: Emit their own energy and measure the reaction. Better performance, but some influence on the environment. E.g. optical encoders, magnetic encoders.
Difference between RGB and Depth Image: An RGB image is made up of 3 colour channels, where each pixel contains 3 values representing the intensity of each colour (e.g. image classification). A depth image is a grayscale image, where each pixel value represents a distance, showing how far the object at that pixel is from the camera (e.g. 3D mapping, obstacle detection).
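A minimal sketch of the two pixel layouts, assuming NumPy-style arrays (the shapes and values are illustrative, not from the notes):

```python
import numpy as np

# RGB image: 3 channels, one intensity value per colour per pixel (0-255)
rgb = np.zeros((480, 640, 3), dtype=np.uint8)   # shape (H, W, 3)

# Depth image: single channel, one distance value per pixel (here metres)
depth = np.zeros((480, 640), dtype=np.float32)  # shape (H, W)

print(rgb[100, 200])    # -> [R G B] intensities at that pixel
print(depth[100, 200])  # -> distance from camera to the object at that pixel
```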
Kalman Filter: An efficient recursive filter estimating the internal state of a linear dynamic system from a series of noisy measurements.
Dynamic Model: The relationship between the input and the output of the system, i.e. how the state evolves over time.
System State: The set of parameters which describe the current state of the system at any particular time.
Measurement Noise: The difference between the measured data and reality. For example, uneven ground can affect a radar's capacity to measure properly.
Process Noise: The difference between the actual state of the system and the predicted state. It accounts for unknown or unpredictable changes in the system, like wind.
1. INPUT: Initial measurement from the sensor: z_n
2. PREDICT: Dynamic model (state extrapolation): x̂_{n+1,n} = x̂_{n,n} + Δt · ẋ̂_{n,n}. If velocity is constant: ẋ̂_{n+1,n} = ẋ̂_{n,n}, where ẋ = v. Covariance extrapolation (to predict uncertainty): p_{n+1,n} = p_{n,n}
3. Kalman Gain: K_n = p_{n,n-1} / (p_{n,n-1} + r)
4. UPDATE: State update (position): x̂_{n,n} = x̂_{n,n-1} + K_n (z_n − x̂_{n,n-1}), where K_n = 1/N. State update (velocity): ẋ̂_{n,n} = ẋ̂_{n,n-1} + β (z_n − x̂_{n,n-1}) / Δt
5. Uncertainty Update: p_{n,n} = (1 − K_n) · p_{n,n-1}
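A minimal sketch of these five steps for the 1-D constant-velocity case, assuming a known measurement variance r and no process noise (the function and variable names are illustrative):

```python
import numpy as np

def kalman_1d(measurements, dt, v, x0, p0, r):
    """Scalar Kalman filter for a target assumed to move at constant velocity v.

    measurements: noisy position readings z_n
    dt: time step; x0, p0: initial state estimate and its variance;
    r: measurement noise variance. Process noise is omitted here to
    mirror the steps above, so p only shrinks over time.
    """
    x_hat, p = x0, p0
    estimates = []
    for z in measurements:
        # 2. PREDICT: state extrapolation x̂_{n,n-1} = x̂ + Δt·v,
        #    covariance extrapolation p_{n,n-1} = p
        x_pred = x_hat + dt * v
        p_pred = p
        # 3. Kalman gain: K_n = p_{n,n-1} / (p_{n,n-1} + r)
        K = p_pred / (p_pred + r)
        # 4. UPDATE: correct the prediction with the innovation (z_n − x̂_{n,n-1})
        x_hat = x_pred + K * (z - x_pred)
        # 5. Uncertainty update: p_{n,n} = (1 − K_n)·p_{n,n-1}
        p = (1 - K) * p_pred
        estimates.append(x_hat)
    return estimates

# Example: true position advances 1 m per step; measurements have sigma = 0.5 m
true_pos = np.arange(10.0)
z = true_pos + np.random.normal(0.0, 0.5, size=true_pos.shape)
print(kalman_1d(z, dt=1.0, v=1.0, x0=0.0, p0=1.0, r=0.25))
```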
DEEP LEARNING FOR ROBOT PERCEPTION
Loss Functions: evaluate to what extent the actual outputs y are correctly predicted by the model outputs ŷ.
How to find the size of the output layer: Input: W1 × H1 × C, Output: W2 × H2 × K
• W2 = (W1 − F + 2P)/S + 1
• H2 = (H1 − F + 2P)/S + 1
• K = number of filters
To find the number of params in the layer:
• F²·C = num. of weights in each filter
• F²·C·K (+ K biases) = num. of params in the layer
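A minimal sketch applying both formulas (the helper names are hypothetical):

```python
def conv_output_size(w1, h1, f, s, p):
    """Output width/height of a conv layer: (W1 - F + 2P)/S + 1."""
    w2 = (w1 - f + 2 * p) // s + 1
    h2 = (h1 - f + 2 * p) // s + 1
    return w2, h2

def conv_param_count(f, c, k, bias=True):
    """F^2 * C weights per filter, K filters (+ K biases if used)."""
    per_filter = f * f * c
    return per_filter * k + (k if bias else 0)

# Example: 32x32x3 input, 10 filters of size 5x5, stride 1, padding 2
print(conv_output_size(32, 32, f=5, s=1, p=2))  # -> (32, 32)
print(conv_param_count(f=5, c=3, k=10))         # -> 760 (750 weights + 10 biases)
```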
Grasp Detection: Input: RGB image or 3D Point cloud. Output: Grasp location. Assumptions: Object is static & on a table. Constraints: Real-time,
Accuracy. Two small tasks: 2D Grasp detection (visual) & Motion planning (action)
Grasp Detection with CNN: Given an image of an object, we want to find a way to safely pick it up and hold it.
• We define a “grasp” with 5 values (x, y, θ, width, height)
• Loss function: MSE = (1/n) Σ_{i=1}^{n} (Y_i − Ŷ_i)²
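A minimal sketch of this MSE loss over the 5-value grasp vector (the numbers are made up for illustration):

```python
import numpy as np

# Ground-truth and predicted grasps: (x, y, theta, width, height)
y_true = np.array([120.0, 85.0, 0.30, 40.0, 20.0])
y_pred = np.array([118.0, 90.0, 0.25, 42.0, 19.0])

# MSE = (1/n) * sum((Y_i - Yhat_i)^2), averaged over the 5 grasp values
mse = np.mean((y_true - y_pred) ** 2)
print(mse)
```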