AME 3623: Project 9
- All components of the project are due by Tuesday, April 27th
at 5:00 pm
- Groups are the same as for project 1.
- Discussion within groups is fine.
- Discussion across groups may not be about the specifics of the
project solution (general programming/circuit issues are fine to
discuss).
For this project, you will be reading "image slip" data from a set of
cameras and estimating the distance traveled by the hovercraft in a
given period of time. A key part of this project will be the
construction of a mathematical model that captures this relationship.
This model will allow us, in the long run, to estimate the position and velocity of
the hovercraft in a coordinate frame centered on the craft.
At the end of this project, you should be able to:
- connect a digital sensor to your microcontroller,
- read digital data into your microcontroller,
- display information through a USB serial interface,
- design mathematical models for transforming raw sensor data into calibrated information,
- implement these models in code, and
- test the models.
Component 0: Library Installation
Create a new project in your Arduino environment; copy over the code
that you have developed so far.
- Download and install the OpticalFlowCamera.zip library.
To install this library within the Arduino
environment, select
Sketch menu / Include Library / Add .ZIP Library. Navigate to
and select OpticalFlowCamera.zip.
You will not need to repeat this step in the future.
Component 1: Microcontroller Circuit
Your hovercraft is equipped with a power circuit and three
downward-looking cameras. When turned on, the power circuit delivers
+5V to the red LEDs that are part of the camera system.
Cameras
The CJMCU-110 Optical Flow Camera pin-outs are shown below:
The common lines that are shared across all cameras (and should be
connected together) are:
- Black: Ground
- Red: +5V Power
- Blue: MISO (Arduino pin 12)
- Orange: MOSI (Arduino pin 11)
- Green: SCLK (SPI clock; Arduino pin 13)
- Gray: Reset (choose an unused digital pin)
In addition, each camera has its own Yellow line that is used to select
the camera for communication. Each camera must have a unique digital
output pin assigned to it.
Component 2: Regular Execution of Tasks
Add the following declarations to loop(). Note the static qualifier:
the task object must be constructed only once, so that its internal
timer persists across calls to loop():
// Create a task that will be executed once every 5 ms
static PeriodicAction camera_task(5, camera_step);
camera_task.step();
Component 3: Camera Interface Software for Data Collection
Write a data collection program:
- Define the camera interface pins at the top of your INO file.
For example, the following code defines the select pins for
cameras 1, 2 and 3 as Arduino pins 8, 7 and 10, and the reset pin as
pin 9.
////////////////////////////////////////////////////
// Global constants
// Total number of cameras
const int NUM_CAMERAS = 3;
// Select pins for the 3 cameras
const uint8_t CAMERA_SELECT[NUM_CAMERAS] = {8, 7, 10};
// Common reset pin
const uint8_t RESET_PIN = 9;
- Also in the global space, create and configure a new object for
interfacing to the camera system:
/////////////////////////////////////////////////////
// Global variables
// Camera interface object
OpticalFlowCamera cameras(RESET_PIN);
- In setup(), initialize each of the cameras. For example, the following
code will configure and initialize camera 2 (index 1 in the array):
int ret = cameras.addCamera(CAMERA_SELECT[1]);
Here, you explicitly declare pin 7 as being the select line for
this particular camera. The function returns zero (0) if the
camera was properly detected and initialized. If a non-zero value is
returned, your code should repeatedly flash the onboard LED in some
distinctive pattern inside a while(1) loop, since the code should not
continue executing after this error. See the
OpticalFlowCamera.cpp file for more details (inside of the ZIP file).
- Declare the following global variables, which will represent how much
movement the cameras have detected along their X and Y axes:
int32_t adx[3] = {0,0,0};
int32_t ady[3] = {0,0,0};
- Implement camera_step():
This function will query each of the cameras for
its slip and image quality measures. For example:
int8_t dx, dy;
uint8_t quality;
int result = cameras.readSlip(CAMERA_SELECT[1], dx, dy, quality);
will query the second camera (index 1) for its X and Y slip and image
quality. This function returns a status code. When this
status code is zero, the variables dx, dy and quality are
changed by this function call to reflect the slip since the
last time the camera was queried. The dx, dy and quality
parameters are passed by reference, so the function can modify the
caller's variables; this is a C++ feature that is new relative to C.
If the status code is -1, this indicates an overflow (because
dx/dy can only encode pixel changes of -128 ... 127).
If this error does occur, then your function should alert you to the
problem. The fix is to query the camera more frequently: decrease the
period of the camera task.
If the status code is -2, this indicates that no slip has been
detected (e.g., the hovercraft has not moved). In this case,
dx/dy/quality should not be used (it is safe to assume that
dx=dy=0).
Your function should accumulate (sum) the slip values using the adx/ady variables.
Note that in C and C++, arrays are passed by reference (the function
receives a pointer to the first element), so changes to the individual
array elements will be visible in the calling function.
- In report_step(): print the accumulated adx/ady values (all six
values on a single line).
Component 4: Data Collection
- Set up a test field in the location that you are working in. Ideally:
- There is enough space for your craft to be translated 40 to 100 cm in a straight line.
- The surface is not very reflective.
- The surface has an interesting "visual texture". (without this, the cameras may not detect slip).
- The forward direction of your craft is the side with the power
switch. We will call this the +X direction.
- Place your hovercraft chassis at the start line on one end of your field,
with the forward direction of the craft pointing toward the end line. Clear
the accumulated values. Slowly move your craft to the
end line (do not rotate the craft). Record the accumulated values (6 values in total) in
the following order: camera 1 adx/ady, camera 2 adx/ady and camera 3
adx/ady. In three separate columns, record a movement along the X
direction of your max distance, a movement along the Y direction of
zero meters and a rotation of zero degrees (i.e., these columns
should all be X, 0, 0).
- Repeat this process a total of 10 times.
- Place your hovercraft chassis at the start line, facing with
its left side to the end line (we will call this the +Y
direction). Perform the same procedure as above, while moving
the craft your max distance to its left. Record the accumulated
values, adding three more columns: 0, X, 0.
- Repeat this process a total of 10 times.
- Place your hovercraft chassis anywhere on the surface. Clear the
accumulated values. Rotate the chassis by one full rotation
without translating the center of the chassis.
Record the accumulated values, adding three more columns: 0, 0,
360 (degrees)
- Repeat this process a total of 10 times.
Component 5: Sensor Model
The result of your data collection process is a matrix of 30 rows and
6 columns, representing the accumulated slip measured by each camera
for 30 different cases. This matrix is associated with another matrix of
30 rows and 3 columns, representing the known movement of the chassis along
X, Y and theta for each of the 30 cases. We will use multi-regression
to derive a linear function from the accumulated slip values to the chassis movement values.
Specifically, we wish to solve for functions of the following form:
X = a1 * adx1 + a2 * ady1 + a3 * adx2 + a4 * ady2 + a5 * adx3 + a6 * ady3
where a1 ... a6 are the coefficients of our function, adx?/ady? are
the accumulated slip values for the x and y directions for each
camera, and X is the motion along the chassis' X direction. Note that we
will have corresponding functions (and coefficients) for Y and theta.
See the instructions
for performing multi-regression in Excel.
See the instructions
for configuring Excel for regression if you have not yet used
Excel to perform regression tasks.
In the Summary Output from the regression process, you will
find a column labeled Coefficients. These are the parameters of
the linear model that result from the regression process (a1 ... a6)
from above.
Notes:
- Do not use a bias term in your regression.
- Use all 30 examples in your regression steps for X, Y and theta.
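In matrix form, the data collection above defines a standard least-squares problem. Writing S for the 30x6 matrix of accumulated slip values and M for the 30x3 matrix of known motions (X, Y, theta), the regression solves for the 6x3 coefficient matrix A:

```latex
M \approx S A, \qquad \hat{A} = (S^{\top} S)^{-1} S^{\top} M
```

Excel computes each column of A in a separate regression (one each for X, Y and theta); omitting the bias term corresponds to excluding a column of ones from S.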
Component 6: Implement the Model
Add one more global variable:
float cartesian_pos[3] = {0.0, 0.0, 0.0};
Implement a function of the form:
void compute_chassis_motion(int32_t adx[3], int32_t ady[3], float cartesian_pos[3]);
where adx and ady are arrays representing the accumulated slip values
from each of the cameras, and cartesian_pos is an array representing the
motion along X, Y and theta, respectively. This function will take as
input adx and ady, and fill in the values of the cartesian_pos array.
The motion values are in units of meters, meters, and degrees,
respectively. Since arrays are passed by reference, any change this
function makes to cartesian_pos will be visible to the calling
function.
At the end of camera_step(), call this new function.
Update report_step() to also print out the three cartesian_pos values (on the same line as adx/ady).
Component 7: Testing
- For each of the same movements used during data collection,
record the reported X/Y/theta motion.
- Repeat each five times.
- Report in graphical form the mean and standard deviation of the
Cartesian motion for each of the three movement types (move
forward, leftward, rotational). In your report, there will be a total of three
figures: one each for the estimated X, Y and theta values.
Each figure will be composed of three bar graphs: one for each of
the movement types.
What to Hand In
All components of the project are due by Tuesday, April 27th at
5:00 pm
- Demonstration/Code Review: All group
members must be present. The demonstration must be completed
by Friday, April 30th.
- Check in the following to your project 9 area of Gradescope:
your code (INO file).
- Check in the following to the Project 9 Report area of Canvas:
one pdf with the required figures.
- Personal report: Catme will request that you fill out a
survey about the project. These must be completed by Friday, April 30th
in order to receive credit for the project.
Grading
Project lead credit:
- Each person must be the primary integrator of circuits and code
for two projects over the course of the semester.
For a successful project, we expect:
- A properly configured circuit
- Cameras connected properly
- Properly written software:
- Properly documented code
- Project-level documentation at the top of the ino file:
name and group number, date and project number
- Function-level documentation above the function
definition. Include an abstract description of what the
function does; a list of the names, types and units
associated with each parameter and return value; and the
effects that the function has on the processor or
connected components.
- In-line documentation inside of functions: individual
lines or small groups of lines have an English
description that describes logically what the code is doing.
andrewhfagg -- gmail.com
Last modified: Mon Apr 19 00:42:47 2021