Amanda Trang is a junior in ECE with an interest in electronics, robotics, and game design. Her hobbies include playing piano, rock climbing, and video games; she also enjoys taking mediocre photos of dogs (ask her to show you photos of her dog!).
David Kim is a senior in ECE with interests in software and hardware. He likes to play basketball and also likes to play the drums during his free time. He enjoys listening to music and has some interests in musical instruments and music in general.
Dylan Machado is a junior in ECE with interests in hardware and software. He enjoys traveling, food, and dogs.
Emmett Milliken is a junior in ECE with the future goal of going into research and development of entertainment lighting fixtures. He loves colors and moving things you can program, and automated entertainment lighting is a great fusion of those. In addition to lighting, he is also interested in all other kinds of entertainment/production/stage tech, including audio and special effects equipment. When he is not doing school or tech related things, he enjoys hiking, kayaking, quilting, and finding any excuse to use label makers and power tools.
Sofya Calvin is a junior in ECE interested in pursuing a career in energy, more specifically regarding renewable energy, (smart) power grids, and energy storage. She hopes to one day bridge her technical skills with her political knowledge and work to shape the future of energy policy. In her spare time, Sofya loves to dance and is a member of two dance teams on campus. She also enjoys working out, listening to music, attending concerts, and being spontaneous/adventurous.
ECE 3400, Fall 2017, Group 3
Date | Leader |
---|---|
Sept 1st - Sept 19th | Amanda Trang |
Sept 20th - Oct 9th | Emmett Milliken |
Oct 10th - Oct 29th | David Kim |
Oct 30th - Nov 17th | Dylan Machado |
Nov 18th - Dec 5th | Sofya Calvin |
I participated in formulating the standards, roles, and procedures as stated in this contract. I understand that I am obligated to abide by these terms and conditions. I understand that if I do not abide by these terms and conditions, I will suffer the consequences as stated in this contract.
Signature | Date |
---|---|
Amanda Trang | 09/01/2017 |
David Kim | 09/01/2017 |
Emmett Milliken | 09/01/2017 |
Dylan Machado | 09/01/2017 |
Sofya Calvin | 09/01/2017 |
Elder care is becoming a more prevalent issue as the elderly population increases while the caregiver population stays constant. The solution the Japanese government has settled on is to apply advancing robot technology to assist the elderly. These robots are being created to perform a multitude of tasks, such as turning off lights, transforming into wheelchairs, lifting people, and simply providing company.
The stakeholder in this situation is the Japanese government: according to Business Insider, a third of the government’s budget is going towards the development of these robots, coined “carebots.” While European and American countries are not struggling with the same age imbalance Japan is, funding and research are also underway in those regions. Other parties of note are the electronics corporations and organizations involved in developing carebots that will compete to have the most marketable bot–Honda, Panasonic, RIKEN, CT Asia Robotics, among countless others–as well as the International Organization for Standardization (ISO), which must continuously develop standards for human-robot interaction as the technology advances. Individual engineers will be involved in the direct research and development of the robots, constantly making decisions that will directly affect lives once the products are on the market. Finally, the consumer public will feel the most direct impact. Families purchasing carebots will see the most immediate effects, whether that is improved elder care or dealing with technical issues and accidents. Less directly, nursing home businesses will have to deal with competition from non-human sources, and family culture could change drastically.
Under utilitarianism, the goal would be to maximize the happiness of all parties equally. Since robots have no happiness of their own to count, this means making both the elderly and their respective families and caretakers happy. One could argue that having exclusively robots manage elder care would lead to the greatest amount of happiness among everyone, since seniors would (theoretically) be in good hands and their families would be able to focus on their own lives. However, it is likely that the lack of human interaction would actually hurt seniors, meaning this would not lead to the most happiness for everyone. The best setup, therefore, is probably one where the elderly are also cared for by other people on occasion. While there is less overall utility, since the caretaker has to spend their time on someone else, the total utility is split between parties much more equally.
It can be argued that human-human interaction is often seen as more virtuous than robot-human interaction, since robot-human interactions lack the personal and emotional bonding that comes with human-human interaction. Therefore the idea of shifting senior care to robots can be seen as partially failing the virtue test: the shift would deprive elders of what little human interaction they already have. However, it can also be argued that robot senior care is the most cost-effective way to tackle the problem of the decreasing number of senior caregivers. Although shifting elder care to robots may not be the ideal way for elders to be cared for, it is the most cost-effective solution to the problem that people are no longer stepping up to care for seniors. The robot care system does not go strongly against moral standards either, and therefore it can be concluded that it does not fail the virtue test.
If only robots are to care for the elderly, the system fails or passes the justice test depending on one’s views. If robots are not considered beings that carry a burden of work, then this passes the justice test. However, if they do carry the burdens of work, then having robots care for seniors fails the justice test, because nobody else is doing their fair share of the work. The solution would be to distribute the work among robots, workers, and family members so everyone contributes to serving the elderly. Under a system where everyone is pulling their weight, the justice test passes in all cases.
There are several economic challenges to consider with robots. Since this is still a relatively new technology undergoing development, there is a high cost associated with it. This high barrier of entry to the technology means only the wealthy would have access to these caretakers. Those less fortunate therefore cannot consider this an option for elder care. The only solution would be to offset the costs of these robots, possibly through government subsidization or third party assistance. In addition, most of the ethics tests suggest that the ideal balance in elder care would be some combination of human caretakers and robots. However, this means a consumer would have to pay twice for care, raising costs even more. Again, the only solution would be to somehow offset the total cost for care.
Historically, society has also been hesitant to accept new technology, especially from older generations. This could lead to people being afraid to use robots in elder care, even if they are competitively priced on the market. A simple mistake could lead to something as disastrous as death. To combat this, there needs to be as much transparency and communication about the technology as possible. Manufacturers would likely have to emphasize the safety of their products through demonstrations, tests, and more. There would also likely be a need for heavy regulation in this industry to calm the public’s fears. This would have to be maintained by the government to push safety and innovation among products. The downside of such a regulatory body would be the increased taxes from running it.
All in all, there are many aspects to consider in relation to treating elders with robotic caretakers. Although there are still issues with the technology, it has the potential to be revolutionary in the future if society is able to solve the ethical problems associated with it. On the whole, our group has faith that given an appropriate amount of research and regulation, this technology has the potential to ease and enhance thousands of lives and families. Assuming this ensues, it will then be largely up to the companies producing these robots to convince the general public of the numerous benefits.
Team 1: Amanda Trang, Emmett Milliken
Team 2: David Kim, Dylan Machado, Sofya Calvin
The goal of this lab was to introduce the concepts of the Arduino IDE as well as the Arduino Uno microcontroller itself. Additionally, we formed a basic structure for the robot and added a simple autonomous function.
We used the Arduino IDE to write and upload our code to the Uno. To install the Arduino IDE, go here.
To make the internal LED on the Arduino blink, we first set up the hardware as outlined in the schematic below. It is important to have the 300 Ohm resistor in series with the potentiometer, as this prevents too much current from being sourced to the pins.
In order to test whether or not our connection with the board was working, we used the example sketch, Blink (File>Examples>01.Basics>Blink). After uploading the sketch to the board, the on-board LED toggled on and off once per second, verifying that our connection and board were working.
void setup() {
// initialize digital pin 13 as an output.
pinMode(13, OUTPUT);
}
void loop() {
digitalWrite(13, HIGH); // turn the LED on (HIGH is the voltage level)
delay(1000); // wait for a second
digitalWrite(13, LOW); // turn the LED off by making the voltage LOW
delay(1000); // wait for a second
}
To make an external LED blink (as opposed to the one built into the Arduino), we had to modify the schematic to add in the external LED as shown below:
We then modified the Blink sketch to blink an external LED through a digital output pin. Since digital output pins on the Uno will stop working if they are used to source too much current, we added a 330 ohm resistor in series with an LED.
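As a rough check (assuming a typical forward drop of about 2 V across the LED), this resistor keeps the pin current comfortably below the ATmega328P's 40 mA per-pin maximum:

```
I \approx \frac{5\,\mathrm{V} - 2\,\mathrm{V}}{330\,\Omega} \approx 9\,\mathrm{mA}
```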
void setup() {
// initialize digital pin 12 as an output.
pinMode(12, OUTPUT);
}
// the loop function runs over and over again forever
void loop() {
digitalWrite(12, HIGH); // turn the LED on (HIGH is the voltage level)
delay(1000); // wait for a second
digitalWrite(12, LOW); // turn the LED off by making the voltage LOW
delay(1000); // wait for a second
}
The next step was to test out using analog inputs. We used a potentiometer and a resistor to make a variable voltage input to the analog A0 pin on the board. To see the value of the input in real time, we used the serial monitor, which allows us to print a value from the board to the screen. To do this, we added this line to the code:
Serial.println(voltage);
When we varied the position of the potentiometer, we were able to vary the output value printed to the serial monitor.
// Base code from https://www.arduino.cc/en/Tutorial/ReadAnalogVoltage
void setup() {
// initialize serial communication at 9600 bits per second:
Serial.begin(9600);
}
void loop() {
int sensorValue = analogRead(A0);
float voltage = sensorValue * (5.0 / 1023.0);
  Serial.println(voltage);
}
The Arduino Uno does not have any analog output pins, but the digital output pins that can do pulse width modulation (PWM) are able to approximate the effect of an analog output. This is done using a digital signal with a varying duty cycle (the percentage of a cycle for which the output of the signal is high). If the switching happens fast enough, the load acts as though the voltage is the time-averaged value of the signal, so the high voltage multiplied by the duty cycle is the effective output voltage.
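As a quick worked example (assuming the Uno's 5 V logic level): an analogWrite value of 127 out of 255 is roughly a 50% duty cycle, so

```
V_{\text{eff}} = D \cdot V_{\text{high}} \approx 0.5 \times 5\,\mathrm{V} \approx 2.5\,\mathrm{V}
```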
We then used this analog output to power an LED so that we could see the effect. Because the analog input values range from 0 to 1023, and the analog output values range from 0 to 255, we multiplied the value read from the input by 255/1023 when feeding that value to the analog output function. Here is our code:
void setup() {
// initialize serial communication at 9600 bits per second:
Serial.begin(9600);
pinMode(3,OUTPUT);
}
void loop() {
// read the input on analog pin 0:
int sensorValue = analogRead(A0);
// Convert the analog reading (which goes from 0 - 1023) to a voltage (0 - 5V):
float voltage = sensorValue * (5.0 / 1023.0);
analogWrite(3,sensorValue/4);
// print out the value you read:
Serial.println(voltage);
}
Using an oscilloscope, we checked the frequency of the signal and how it is affected by the potentiometer. This can be viewed in the following video:
To set up the servo, we again modified the LED setup, this time removing the LED component altogether and replacing it with a servo motor and a variable power supply, which we held at 5V. The schematic is shown below:
We followed a method similar to mapping the potentiometer to the LED. Building off the code from the setup of the servos, we read the value of A0–our potentiometer value–and converted it to a value on a scale from 0-180, which we knew to be the range of the servos (from full reverse to full forward). We also printed the value to the Serial Monitor, just to ensure we were getting the correct values.
#include <Servo.h>
Servo pin;
void setup() {
// initialize serial communication at 9600 bits per second:
Serial.begin(9600);
//pinMode(3,OUTPUT);
pin.attach(9);
}
void loop() {
// read the input on analog pin 0:
int sensorValue = analogRead(A0);
pin.write(sensorValue/5.68);
// print out the value you read:
Serial.println(sensorValue*180.0/1023.0);
}
This code was a success, allowing us to set the servo’s speed and direction based on the potentiometer reading.
The assembly portion of the robot was made difficult due to the range of components available. Many screws would not fit through the chassis, or would interfere with the servo due to length. After a series of guessing-and-checking different parts, we mounted two servos, two wheels, a ball caster, and the Arduino onto a chassis. We wired the servos similarly to the previous portion of the lab.
This was more time-consuming than difficult, especially with a limited number of Allen wrenches and only so many tasks that could be done at one time.
After assembling our simple robot, we wrote a simple program to attempt to run the robot in a square formation. We had to manually recalibrate the servos, as writing the value 90 to either one did not result in it stopping. We initialized both servos (named servoL and servoR for left and right, respectively):
#include <Servo.h>
Servo servoL;
Servo servoR;
void setup() {
Serial.begin(9600);
servoL.attach(9);
servoR.attach(10);
//pinMode(11, OUTPUT);
//pinMode(12, OUTPUT);
}
and wrote a forward() function as well as a right() function. Knowing that 0 would be “full reverse” and 180 was “full forward,” we initially set each servo to 180 for forward(). After attempting to run the program, we realized one of the servos was mounted backwards with respect to the other, and modified our code to write 0 to servoR.
void forward(int msec) {
servoL.write(180);
servoR.write(0);
//digitalWrite(11, HIGH);
//digitalWrite(12, HIGH);
delay(msec);
}
void right(int msec) {
servoL.write(180);
servoR.write(90);
//digitalWrite(11, HIGH);
//digitalWrite(12, LOW);
delay(msec);
}
We also added in two LEDs that flashed their respective colors to visually see whether the code was running “forward” or “right.” In reality, the robot went in a very questionable quadrilateral-like shape, as we did not have enough time to fine-tune the time values. Any code relating to the LEDs has been commented out due to an issue causing the servos to run incorrectly. If the LEDs are implemented in the future, we will debug this code.
void loop() {
forward(5000);
right(2000);
forward(5000);
right(2000);
forward(5000);
right(2000);
forward(5000);
}
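As noted above, writing 90 did not fully stop our servos. A minimal calibration sketch along these lines can help find the value that actually stops a given servo (illustrative only: it assumes the servo under test is on pin 9 and that test values are typed into the Serial Monitor):

```cpp
#include <Servo.h>

Servo servoL;  // the servo being calibrated (pin 9 assumed)

void setup() {
  Serial.begin(9600);
  servoL.attach(9);
  servoL.write(90);  // nominal "stop" value
  Serial.println("Enter a value near 90 to test the stop point");
}

void loop() {
  if (Serial.available() > 0) {
    int value = Serial.parseInt();  // read a test value from the Serial Monitor
    if (value > 0) {                // parseInt() returns 0 on a timeout
      servoL.write(value);
      Serial.print("Wrote ");
      Serial.println(value);
    }
  }
}
```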
Acoustic team: David Kim, Dylan Machado, Emmett Milliken
Optical team: Sofya Calvin, Amanda Trang
The goal of this lab was to get familiar with the microphone and the IR sensor that we would be adding to our robot. We did this by using the Open Music Labs Arduino FFT library to map the Fourier Transforms of the signals that the sensors were detecting. Additionally, we utilized op-amps to create filters and amplifiers for the audio and optical signals. In the future, these will be incorporated onto the robot to detect a 660Hz start signal as well as IR-emitting treasures throughout the map.
We initially split into two teams: Acoustic and Optical. The Acoustic team (David, Dylan, Emmett) focused on the microphone circuit, while the Optical team (Sofya, Amanda) worked with the IR sensor. Each team was to do the Fourier analysis corresponding to their circuit. As we did not finish during our lab hours, much of the lab was completed with varying members of the team during open lab.
For both the acoustic and optical parts of this lab, fast Fourier transforms (FFTs) were used to find the frequency content of the sampled input signal from the sensors.
The FFT library takes in an analog signal, samples it, and calculates the frequency content. The maximum frequency that the FFT can accurately detect is half of the sampling frequency.
The FFT takes in 256 samples at equally spaced intervals and outputs 256 values that represent the frequency content of the input signal. The 256 output values are the calculated frequency content within certain bins, or ranges of frequencies. Because the FFT of a real signal is symmetric about zero, only half of the outputs are unique; this is why sets of 128 values are output over serial.
In order to visualize the data coming out of the Arduino, we copied sets of the data from the serial window into Excel. We initially spent a fair bit of time trying to get MATLAB to display the frequency content in real time, but were ultimately unsuccessful.
Our Excel graphs of the frequency content of our signals can be found later on this page.
When initially setting up the microphone, we were somewhat confused by the diagram given to us:
After reviewing the FFT results, we determined a bandpass filter would be ideal to distinguish the 660Hz tone from environmental noise. After a series of failed trials and many exasperating hours (e.g. an incorrectly wired bandpass filter, two or three correctly wired filters with excessive gain, a broken microphone, a faulty Arduino pin, etc.), we ultimately settled on a low-pass filter and a high-pass filter back-to-back.
For reference, here is a selection of a few graphs of our attempts at running the FFT through the failed bandpass filters.
The following circuits are the filters we designed based on the gain, passbands, and stopbands we wanted. Using Filter Wizard, we found the values for the components we would need.
Low-pass:
High-pass:
As the LM358 is a dual op-amp, we were able to fit both circuits on the same board. The final circuit we used for the audio portion of this lab is shown in the image below: the output of the microphone feeds the input of the low-pass filter, the output of the low-pass filter feeds the high-pass filter, and the output of the high-pass filter is connected through a capacitor to the analog port of the Arduino. The implementation was as follows:
Once we had succeeded in detecting individual frequencies, we focused on getting a better idea of the actual range.
The Arduino Uno runs on a 16MHz clock. The ADC uses a prescaling value in order to slow down that clock to 16MHz/(prescaling factor). Changing the prescaling factor therefore lets us change the sampling frequency. By looking at the documentation for the ATmega chip (found here), along with careful reading of the example code and Team Alpha’s website, we figured out that we were using a prescaling factor of 32. This meant the ADC clock was running at 500kHz. The ATmega documentation says that an average ADC conversion takes 13 clock cycles to complete, so we would be sampling at around 38kHz, making each bin around 150Hz wide. The bin size we estimated from our measurements was about 146.666Hz, so the two lined up fairly well.
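The same arithmetic, written out as a quick sketch (the constants follow the reasoning above; values are approximate):

```cpp
// Effective sampling rate and FFT bin width for the Open Music Labs FFT.
const float F_CPU_HZ   = 16000000.0; // Arduino Uno system clock
const float ADC_CYCLES = 13.0;       // ~13 ADC clock cycles per conversion
const float FFT_POINTS = 256.0;      // FFT_N in the Open Music Labs library

float binWidth(float prescaler) {
  float sampleRate = (F_CPU_HZ / prescaler) / ADC_CYCLES;
  return sampleRate / FFT_POINTS;
}

void setup() {
  Serial.begin(9600);
  Serial.println(binWidth(32.0));   // ~150 Hz per bin with a prescaler of 32
  Serial.println(binWidth(128.0));  // ~38 Hz per bin with a prescaler of 128
}

void loop() {}
```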
The capacitor was added right before the output signal was sent into the Arduino in order to cut the DC offset. In the graphs above, the one on the left is from the circuit with the capacitor and the one on the right is from the circuit without it.
After building and connecting the filters to our circuit, we worked out how to use the prescalers properly. The prescaler was preset to 32 by this line of code in the setup portion:

```c
ADCSRA = 0xe5; // set the adc to free running mode
```

and this line of code in the loop portion:

```c
ADCSRA = 0xf5; // restart adc
```

We found that, in order to change the prescaler, both values need to be changed; we set the prescaler to 128 by setting ADCSRA like this:

```c
ADCSRA = 0xe7; // set the adc to free running mode
ADCSRA = 0xf7; // restart adc
```

We also found that as the prescaler value increases, the frequency resolution increases, but the total range of measurable frequencies decreases. This is why the graph appears to shift to the right, as shown in the graphs above.
The capacitor in our circuit cuts the DC offset; because the signal then cannot go below 0V at the Arduino input, the negative half of the waveform is clipped and is read almost like a square wave, which causes the multiple peaks after the initial peak in the graphs above.
Note: the actual bin numbers are one less than those shown in the graph, because we forgot to account for Excel starting its indexing at 2 when graphing. So: 660Hz is bin 18, 590Hz is bin 16, and 730Hz is bin 19.
Once again we calculated our actual sampling rate as before and found that we were sampling at around 9.4kHz with the prescaler at 128, so each bin would be around 36.87Hz wide. This matches the bins that the three signals in the graph above fell into, and the graph shows that we were able to distinguish the three signals from each other.
The goal of the Optical team was to detect a 7kHz IR beacon through the Arduino and perform a Fourier analysis on the signal. We first created a simple circuit (shown below) to detect the IR-emitting treasure.
The operating principle of the phototransistor is that it passes more current the more light it receives and, similarly, less current with less light. We were able to test the functionality of this simple circuit using the oscilloscope: the output voltage correctly dropped when the phototransistor was covered (i.e. exposed to no light).
We set the treasures by attaching the output to the oscilloscope and adjusting the amplitude and frequency. This is indicated by channel 2 in blue in the image below. By holding the treasure against the phototransistor, we could see the effect in the output on the oscilloscope, as seen on channel 1 in yellow. The Fourier analysis will be discussed in the following section.
After initial testing, we determined that the signal received from the treasure was too weak at a realistic distance, and decided to implement an amplifier circuit. Using the LM358 op-amp and 1kΩ and 100Ω resistors, we created a simple non-inverting amplifier with a gain of approximately 10.
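For reference, with the 1kΩ resistor in the feedback path and the 100Ω resistor to ground, the standard non-inverting gain expression gives:

```
A_v = 1 + \frac{R_f}{R_g} = 1 + \frac{1\,\mathrm{k}\Omega}{100\,\Omega} = 11 \approx 10
```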
The actual implementation is as shown, with the treasure transmitting a signal at the top.
After determining the circuit could detect the IR signal successfully (before the amplifier was implemented), we ran the data through the FFT from the Open Music Labs library. We worked off the example code offered on their site. We used the preset ADC clock prescaler of 32, which is shown in the following line:
ADCSRA = 0xe5; // set the adc to free running mode
For this part of the lab we used the prescaler value of 32 because we determined that, if we used a value of 128, the signal at 17kHz would most likely be cut off.
We printed the output of the FFT to the Serial monitor and were then able to copy the data into Excel for visualization. We collected two sets of data for each frequency the treasure was set to.
Without modifying the code, we continued to collect data with the amplifier implementation. We can distinctly see the difference between the normal and the amplified signals, as there is a significantly higher amount of frequencies within the desired bins. We will likely use a similar implementation on the robot in the future.
Our measurements showed us that the 7kHz signal was in bin 48, the 12kHz signal was in bin 81, and the 17kHz signal was in bin 114. Our bin size was calculated to be about 150 Hz ([(16 MHz / 32 prescaler) / 13 clock cycles] / 256 bins). Given the calculated bin size and the bins the signals fall into in the graph shown below, the circuit appears to accurately detect the treasures at each frequency.
The FFT code in its entirety can be viewed below. (We used the example code that was provided to us for this lab; the only part we modified was the prescaler values, as discussed in the separate audio and optical sections above.)
/*
fft_adc_serial.pde
guest openmusiclabs.com 7.7.14
example sketch for testing the fft library.
it takes in data on ADC0 (Analog0) and processes them
with the fft. the data is sent out over the serial
port at 115.2kb.
*/
#define LOG_OUT 1 // use the log output function
#define FFT_N 256 // set to 256 point fft
#include <FFT.h> // include the library
void setup() {
Serial.begin(9600); // use the serial port
TIMSK0 = 0; // turn off timer0 for lower jitter
ADCSRA = 0xe5; // set the adc to free running mode
ADMUX = 0x40; // use adc0
DIDR0 = 0x01; // turn off the digital input for adc0
}
void loop() {
while(1) { // reduces jitter
cli(); // UDRE interrupt slows this way down on arduino1.0
for (int i = 0 ; i < 512 ; i += 2) { // save 256 samples
while(!(ADCSRA & 0x10)); // wait for adc to be ready
ADCSRA = 0xf5; // restart adc
byte m = ADCL; // fetch adc data
byte j = ADCH;
int k = (j << 8) | m; // form into an int
k -= 0x0200; // form into a signed int
k <<= 6; // form into a 16b signed int
fft_input[i] = k; // put real data into even bins
fft_input[i+1] = 0; // set odd bins to 0
}
fft_window(); // window the data for better frequency response
fft_reorder(); // reorder the data before doing the fft
fft_run(); // process the data in the fft
fft_mag_log(); // take the output of the fft
sei();
Serial.println("start");
for (byte i = 0 ; i < FFT_N/2 ; i++) {
Serial.println(fft_log_out[i]); // send out the data
}
}
}
Graphics team: Sofya Calvin, Amanda Trang, Dylan Machado
Audio team: Emmett Milliken, David Kim
This lab has two main goals: one, to take external inputs from the Arduino to the FPGA and display them on a screen through VGA; and two, to generate a three-tone sound to a speaker via the 8-bit DAC. The graphics portion is intended to be a stepping stone toward a final goal of mapping the maze on-screen during competition, while the audio portion will eventually signal the completion of the maze.
Before beginning our lab, we had to prepare our VGA connector with the proper resistors to display on our screen. The connector takes 8 bits of color input: 3 corresponding to red, 3 to green, and 2 to blue. The VGA monitor determines how much of each color to display based on the voltage of the corresponding bits, up to 1 volt. For example, if the display were to show all red, the 3 bits corresponding to red would add up to 1 volt. Furthermore, we made each bit's voltage contribution half of the next more significant bit's. For red, for example, the smallest bit was valued at 0.143 volts, the next bit at 0.286 volts, and the largest bit at 0.571 volts. These add up to 1 volt and allow for varying amounts of color in between.
To achieve this on our VGA, we used a voltage divider. Below is an example of the voltage divider used for the bits corresponding to red. Bits 7, 6, and 5 all modify red on the VGA connector.
Bit 7 contributes the largest voltage, bit 6 half the voltage of bit 7 when activated, and bit 5 the lowest voltage. There is also an internal 50 ohm resistor to ground that we had to account for; it is pictured in the schematic above. From this, we calculated the resistor values needed to build this setup on our VGA connector: using the known voltages, and assuming that when any given bit was high the other two were low, we set up three equations in three unknowns to describe each of the voltage dividers. Since red and green both have three bit inputs, they use the same calculation, while the blue calculation followed a similar process with two equations and two unknowns. In the end, the calculations revealed approximate resistor values of 201.25, 402.4, and 805 Ohms for red and green, and 172.5 and 345 Ohms for blue. We then picked resistor values as close to these as we could find and soldered the appropriate resistors onto the connector. We were then ready to display images on our screen!
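As a numeric sanity check of those values (a small sketch, assuming 3.3 V logic-high outputs from the FPGA and the monitor's 50 Ohm termination), driving one red bit high at a time should reproduce the 0.571 V / 0.286 V / 0.143 V contributions:

```cpp
// Numeric check of the red/green resistor DAC: compute the VGA pin voltage
// when a single bit is driven high and the other two are driven low.
const float V_HIGH   = 3.3;                       // FPGA GPIO logic-high level
const float R_TERM   = 50.0;                      // termination inside the monitor
const float R_BIT[3] = {201.25, 402.4, 805.0};    // bit 7, bit 6, bit 5 resistors

void setup() {
  Serial.begin(9600);
  for (int hi = 0; hi < 3; hi++) {
    // Bits driven low pull the output node toward ground through their
    // resistors, in parallel with the 50-ohm termination.
    float gLoad = 1.0 / R_TERM;
    for (int j = 0; j < 3; j++) {
      if (j != hi) gLoad += 1.0 / R_BIT[j];
    }
    float rLoad = 1.0 / gLoad;
    float v = V_HIGH * rLoad / (R_BIT[hi] + rLoad);
    Serial.print("bit ");
    Serial.print(7 - hi);
    Serial.print(" high -> ");
    Serial.println(v, 3); // expect ~0.571, 0.286, 0.143 V
  }
}

void loop() {}
```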
To change the color of the screen, we first designated in our code the pixel color we wanted.
assign PIXEL_COLOR = 8'b000_111_00; // Green
The program then looped through each pixel and set every one to that designated color. Since all pixels were the same color, there was no need to create an array to keep track of them individually.
To follow this up, we drew a single box, defined through a ternary operator with the pixel coordinates as the condition. In words, the following code tells the pixels between 50 and 150 in both the X and Y coordinates to be white; the rest of the screen is red.
assign PIXEL_COLOR = (PIXEL_COORD_X > 50 && PIXEL_COORD_X < 150 && PIXEL_COORD_Y > 50 && PIXEL_COORD_Y < 150) ? 8'b111_111_11 : 8'b111_000_00;
The next goal was to split the pixels up to display multiple colors on the screen. We knew defining each pixel would be inefficient and wasteful, and instead wanted to break the screen into specifically defined squares. To do this, we split up our box into groups via a series of case statements.
always @ (posedge CLOCK_50) begin
case(PIXEL_COORD_Y / 120)
4'd0 : // row A
case(PIXEL_COORD_X / 120)
4'd0 : PIXEL_COLOR =
8'b111_000_00;
4'd1 : PIXEL_COLOR = 8'b111_001_00;
4'd2 : PIXEL_COLOR = 8'b111_010_00;
4'd3 : PIXEL_COLOR = 8'b111_100_00;
4'd4 : PIXEL_COLOR = 8'b111_110_00;
4'd5 : PIXEL_COLOR = 8'b111_111_01;
default: PIXEL_COLOR = 8'b111_111_11;
endcase
...
end
With the case statements, we first divided our set of pixels into rows, from row A to row D. From there, we looked at the remaining X coordinate values of the pixels and divided them into further columns. This gave us boxes that could each contain a unique color that we designated. The result was a colorful grid on our screen, seen below.
With an objective of taking in at least two inputs from the Arduino, we took a simple route of outputting toggling digital signals from the Arduino on loop. Outputting to digital pins 12 and 13, we alternated between sending signals (0,0), (0,1), (1,0), and (1,1) with 1.5 second intervals. This would create the desired four states.
void loop() {
// put your main code here, to run repeatedly:
digitalWrite(pin1, LOW);
digitalWrite(pin2, LOW);
delay(1500);
digitalWrite(pin1, LOW);
digitalWrite(pin2, HIGH);
delay(1500);
digitalWrite(pin1, HIGH);
digitalWrite(pin2, LOW);
delay(1500);
digitalWrite(pin1, HIGH);
digitalWrite(pin2, HIGH);
delay(1500);
}
We also knew the Arduino runs on a 5V scale, whereas the FPGA uses 3.3V. We designed a simple voltage divider to pull down the voltage as follows:
Where Z1 (R1) was 240Ω and Z2 (R2) was 470Ω. These values were calculated using the Ohm's Law Calculator. (We were not aware the Arduino had a built-in 3.3V pin, because the text had rubbed off.) We connected the pins from the Arduino to this circuit, and the output of the voltage divider to the FPGA. The system looked as follows:
We wanted to ensure our signal was toggling as desired, so we hooked it up to the oscilloscope to view the signals from each pin. The oscilloscope showed us that it was toggling between 0 and 3.3V as expected:
In order to check if our signals were being read correctly, we wrote the LEDs on the FPGA to toggle in accordance with the two signals (i.e. LED1 turned on when switch_1 (from the Arduino) went high, and the same thing for LED2 and switch_2). The debugging process of this is described later in this report. The following is a quick clip of what the LEDs looked like with the toggling signal:
Wanting to use these signals to change the colors on-screen, we returned to Quartus to modify the existing colored-grid code. For simplicity, we modified the first square of each of the first two rows, turning it white when its respective signal went high.
always @ (posedge CLOCK_50) begin
case(PIXEL_COORD_Y / 120)
4'd0 : // row A
case(PIXEL_COORD_X / 120)
4'd0 : PIXEL_COLOR = (switch_1) ? 8'b111_111_11: 8'b111_000_00;
...
4'd1 : // row B
case(PIXEL_COORD_X / 120)
4'd0 : PIXEL_COLOR = (switch_2) ? 8'b111_111_11: 8'b111_111_00;
This gave us four different states: neither square being white (0,0), one white with one colored (0,1) or (1,0), and both white (1,1). Here is a video of the toggling squares:
We additionally wanted to have four different squares change colors, for the clear distinction of the four different states. We changed the square A1 (top left) to turn white on (0,0), B1 white on (0,1), C1 white on (1,0), and D1 (bottom left) on (1,1).
4'd0 : // row A
case(PIXEL_COORD_X / 120)
4'd0 : PIXEL_COLOR = (~switch_1 && ~switch_2) ? 8'b111_111_11: 8'b111_000_00;
...
4'd1 : // row B
case(PIXEL_COORD_X / 120)
4'd0 : PIXEL_COLOR = (~switch_1 && switch_2) ? 8'b111_111_11: 8'b111_111_00;
...
4'd2 : // row C
case(PIXEL_COORD_X / 120)
4'd0 : PIXEL_COLOR = (switch_1 && ~switch_2) ? 8'b111_111_11: 8'b111_111_00;
...
4'd3 : // row D
case(PIXEL_COORD_X / 120)
4'd0 : PIXEL_COLOR = (switch_1 && switch_2) ? 8'b111_111_11: 8'b111_000_11;
Debugging:
We had a few issues with using the correct pins on the FPGA. We first didn’t distinguish between GPIO_0 and GPIO_1, then did not know the correct orientation of the pinout (i.e. where pin 1 was), and then could not interface with the pin itself.
From the oscilloscope check, we were confident the signal was toggling as expected. We then tried debugging using the LEDs on the FPGA. After repeatedly seeing the LEDs stay high for no apparent reason, we realized we were not, in fact, reading from GPIO_1 pins 15 and 17, but rather from pins 5 and 7; the “_1” in “GPIO_15” had confused us.
After this, we switched to outputting on the screen; our logic for changing boxes on-screen had been correct.
The DAC was connected to the FPGA GPIO_1 pins for this part of the lab. We output the sine wave on the even-numbered pins of GPIO_1, pins 8 through 22. The output of the DAC was then connected to the speakers as shown in the pictures below.
To make sure that our connection was working properly, we first wrote code that output a 660Hz sine wave. The code was similar to the 440Hz square wave code; however, we created a new Verilog module called SINE_ROM that reads in sine values from a text file we generated and stores them as a ROM. We then instantiated the module in the main file and connected the inputs and outputs as needed.
SINE_ROM sine (
.addr(address),
.clk(CLOCK_25),
.q({GPIO_1_D[8],GPIO_1_D[10],GPIO_1_D[12],GPIO_1_D[14],GPIO_1_D[16],GPIO_1_D[18],GPIO_1_D[20],GPIO_1_D[22]})
);
Then we had a variable that represented the time, in clock cycles, to wait before the next address access in the sine table.
//660hz sine wave
localparam CLKDIVIDER_660 = 25000000/660/256;
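Plugging in the numbers, this divider works out to roughly 148 clock cycles between successive sine-table reads:

```
\frac{25{,}000{,}000}{660 \times 256} \approx 148
```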
The algorithm that we used has a counter decrementing on every positive edge of the clock; when it reaches 0, the design reads the next value from the sine table and the counter is reset to the value of the local parameter CLKDIVIDER_660.
/* 660 hz sine wave */
always @ (posedge CLOCK_25) begin
if (counter == 0) begin
counter <= CLKDIVIDER_660 - 1;
if (address == 255) begin
address <= 0;
end
else begin
address <= address + 1;
end
end
else begin
counter <= counter - 1;
end
end
We first used the template code that was provided to us and followed the example that Team Alpha had on their website. The following is the code that we added to the DE0_NANO template.
//time for 440hz square wave
localparam CLKDIVIDER_440 = 25000000/440/2;
...
// Sound variables
reg square_440; // 440 Hz square wave
assign GPIO_0_D[2] = square_440;
...
//Sound state machine (440hz square wave)
always @ (posedge CLOCK_25) begin
if (counter == 0) begin
counter <= CLKDIVIDER_440 - 1; // reset clock
square_440 <= ~square_440; // toggle the square pulse
end
else begin
counter <= counter - 1;
square_440 <= square_440;
end
end
After we ran the code on the DE0-Nano, we connected the output to the oscilloscope and got the output that we were expecting.
For outputting three distinct tones, we took two different approaches. At first we thought that we were supposed to output three separate frequencies at the same time, so we started working on that approach. However, it proved difficult because there was a problem with assigning the outputs of three different modules to the same pins that we had set up for the output (GPIO_1 even pins 8-22). Then we were told that the task was to output three different frequencies, one at a time. So we fixed up our code to do this, which came out to be a lot simpler.
First we found some different frequencies that we wanted to output.
//notes in Dm11 chord
localparam CLKDIVIDER_D = 25000000/294/256;
localparam CLKDIVIDER_F = 25000000/349/256;
localparam CLKDIVIDER_A = 25000000/440/256;
localparam CLKDIVIDER_C = 25000000/523/256;
localparam CLKDIVIDER_E = 25000000/660/256;
localparam CLKDIVIDER_G = 25000000/784/256;
Next we added two more variables: one called duration, to make sure each frequency plays for one second at a time, and one called note, to keep track of which of the three tones is currently playing.
We then coded the program similarly to the 660Hz sine output, but with more conditions to check. One additional condition checks whether duration has decremented to 0; if so, it is reset to ONE_SEC and the tone changes. The second addition checks which note is currently playing and which note to change to.
/* 3 distinct tones played for 1 sec at a time*/
always @ (posedge CLOCK_25) begin
if (duration == 0) begin
duration <= ONE_SEC;
if (note == 0) begin
count <= CLKDIVIDER_C - 16'b1;
note <= 2;
end
else if (note == 1) begin
count <= CLKDIVIDER_G - 16'b1;
note <= note - 2'b1;
end
else if (note == 2) begin
count <= CLKDIVIDER_E - 16'b1;
note <= note - 2'b1;
end
end
else begin
if (counter == 0) begin
counter <= count - 16'b1;
if (address == 255) begin
address <= 8'b0;
end
else begin
address <= address + 8'b1;
end
end
else begin
counter <= counter - 16'b1;
end
duration <= duration - 25'b1;
end
end
After we finished this part of the lab, we went back to our first approach, outputting three tones at the same time, because it seemed interesting. With the help of the TA we figured out that we needed three separate modules (one for each frequency) and three temporary output registers for the three modules.
//multifrequency output
SINE_ROM sine1 (
.addr(address1),
.clk(CLOCK_25),
.q(out1)
);
SINE_ROM sine2 (
.addr(address2),
.clk(CLOCK_25),
.q(out2)
);
SINE_ROM sine3 (
.addr(address3),
.clk(CLOCK_25),
.q(out3)
);
Then we add up the three outputs of the separate modules and divide by 3, to keep the summed amplitude from getting too high, before outputting the result from the FPGA.
The setup of the output was a little tricky at first but we figured out that we needed to assign the desired output pins:
//for multifrequency output
reg[7:0] final;
assign {GPIO_1_D[8],GPIO_1_D[10],GPIO_1_D[12],GPIO_1_D[14],GPIO_1_D[16],GPIO_1_D[18],GPIO_1_D[20],GPIO_1_D[22]} = final;
We then had three separate always blocks, one for each of the three tones we wanted. Each block steps through the sine table; the only difference between them is the clock-divider value for the desired tone.
always @ (posedge CLOCK_25) begin
if (counter1 == 0) begin
counter1 <= CLKDIVIDER_E - 1;
if (address1 == 255) begin
address1 <= 0;
end
else begin
address1 <= address1 + 1;
end
end
else begin
counter1 <= counter1 - 1;
end
end
This last always block then combines the outputs of the three separate tones:
always @ (posedge CLOCK_25) begin
if (counter == 0) begin
counter <= CLKDIVIDER_G - 16'b1;
final <= ((out1 + out2 + out3 )/3);
end
else begin
counter <= counter - 16'b1;
end
end
Although the output sound was not very clean, this seemed to work to a certain extent. If we have more time, we will try to output a more pleasant, cleaner-sounding chord.
Radio team: Sofya Calvin, Amanda Trang, Dylan Machado
FPGA team: Emmett Milliken, David Kim
The objective of this lab was to implement radio communication between two Arduinos–which will later be implemented as radio communication between the robot and our base station. An additional component was integrating the work from Lab 3 to display updated maze data through VGA, requiring the states of the visited areas to be stored.
The majority of the wireless communication was implemented through the template code provided in GettingStarted and the RF24 library. We first calculated the identifier numbers for our pipes using the 2(3D + N) + X formula provided. As Day 0 and Team 3, our identifier values came out to be 6 and 7.
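Plugging in D = 0 (Day 0), N = 3 (Team 3), and X = 0 or 1 for the two pipes:

```
2(3 \cdot 0 + 3) + 0 = 6, \qquad 2(3 \cdot 0 + 3) + 1 = 7
```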
const uint64_t pipes[2] = { 0x0000000006LL, 0x0000000007LL };
The message (by default, set to a timestamp) is passed to radio.write() in order to send it to the other, receiving radio. The transmitter then waits for a response (i.e. an acknowledgement) that the data was received correctly; this ACK is already implemented by the library. On the receiving side, radio.read() is called until the transfer is reported as “done,” and the received data can then be printed to the serial monitor.
We connected the two radios to the two Arduinos. Putting this program on both Arduinos and setting one to T(ransmit) and the other to R(eceive), we were able to view the timestamps of the messages on both serial monitors. Furthermore, we found through physical testing that the wireless communication had a range of around 10 feet. This will be important to know later on when we implement wireless communication on Brooklynn.
Sending the whole maze wirelessly was a fairly minor addition to the GettingStarted.ino template code. We started by defining an arbitrary 2D maze array and sent the maze in a single payload:
unsigned char maze[4][5] =
{
0, 0, 1, 2, 3,
2, 2, 0, 1, 2,
1, 1, 3, 2, 2,
1, 1, 2, 0, 1,
};
// Send the maze
printf("Now sending the maze...\n");
bool ok = radio.write(maze, sizeof(maze));
if (ok)
printf("ok...");
else
printf("failed.\n\r");
// Now, continue listening
radio.startListening();
Again, radio.write() does the heavy lifting, sending the data and assigning ok to true or false, and we print the response accordingly (instead of sending and receiving the timestamp). On the receiving end, we radio.read() the received data (called got_maze), printing it to the serial monitor by iterating through the entire 2D array. The serial monitor looked as follows:
unsigned char got_maze[5][5];
bool done = false;
while (!done)
{
// Fetch the payload.
done = radio.read( got_maze, sizeof(got_maze) );
// Print the maze
for (int i=0; i < 5; i++) {
for (int j=0; j < 5; j++) {
printf("%d ", got_maze[i][j]);
}
printf("\n");
}
// Delay just a little bit to let the other unit
// make the transition to receiver
  delay(20);
}
Sending the whole maze on each loop was evidently not the most efficient way to do it–particularly in relation to power consumption, if this were a larger-scale project. Instead, we chose to send only new, changing data (i.e. new position, discovered treasure, etc.). By initializing x and y coordinate variables as well as a “state” variable (called x_coord, y_coord and pos_data respectively), we are able to simply increment the desired values to display the robot’s position.
 | 0 | 1 | 2 | 3 | 4 |
---|---|---|---|---|---|
0 | 000 00 xx | 001 00 xx | 010 00 xx | 011 00 xx | 100 00 xx |
1 | 000 01 xx | 001 01 xx | 010 01 xx | 011 01 xx | 100 01 xx |
2 | 000 10 xx | 001 10 xx | 010 10 xx | 011 10 xx | 100 10 xx |
3 | 000 11 xx | 001 11 xx | 010 11 xx | 011 11 xx | 100 11 xx |
This table is the binary representation of our 4x5 grid.
To send the new data, we created a new variable (called new_data) as the packet to send to the base station. This is a 7-bit piece of information, in which the first three bits are the x position, the next two the y position, and the last two the state data. This layout was chosen because the grid is 5 squares wide, requiring 3 bits, but only 4 tall, requiring 2 bits. For testing purposes, we arbitrarily made 4 states, which also required 2 bits. To pack the data this way, we shift the x and y position data so they sit next to the state data. The debugging for the packet data is described later in this report.
new_data = x_coord << 4 | y_coord << 2 | pos_data;
// x x x | y y | d d
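For reference, the fields can be recovered on the receiving side by reversing the shifts; a minimal sketch (in our system the unpacking actually happens on the FPGA, described later, but the bit positions are the same):

```cpp
// Reverse of the packing above: recover the fields from the 7-bit packet.
void unpack(unsigned char packet, unsigned char &x, unsigned char &y, unsigned char &state) {
  x     = (packet >> 4) & 0x07; // bits 6-4: x position (0-4)
  y     = (packet >> 2) & 0x03; // bits 3-2: y position (0-3)
  state =  packet       & 0x03; // bits 1-0: state data
}
```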
To actually send this data, we follow a similar process in sending the whole maze. We send the new data through radio.write() again.
printf("Now sending new map data\n");
bool ok = radio.write( &new_data, sizeof(unsigned char) );
if (ok)
printf("ok... \n");
else
printf("failed.\n\r");
// Now, continue listening
radio.startListening();
The sending side of the Serial Monitor looks as follows:
To verify the data being transmitted and received, we simply read the data back and parse it into a string of bits–which may not be the most efficient way to do it, but it made debugging much simpler to see the binary representation of the packet instead of a decimal value.
String got_string = String(bitRead(got_data_t, 6)) + String(bitRead(got_data_t, 5)) + String(bitRead(got_data_t, 4)) + " " + String(bitRead(got_data_t, 3)) + String(bitRead(got_data_t, 2)) + " " + String(bitRead(got_data_t, 1)) + String(bitRead(got_data_t, 0));
// Spew it
Serial.println("Got response " + got_string);
On the receiving end, we similarly declare a variable for the data received (got_data) and verify whether it was received. Using radio.read(), we take that data and print the result to the serial monitor. The Serial Monitor output is as follows:
unsigned char got_data;
bool done = false;
while (!done) {
// Fetch the payload, and see if this was the last one.
done = radio.read( &got_data, sizeof(unsigned char) );
// Spew it
// Print the received data in binary
String bin_string = String(bitRead(got_data, 6)) + String(bitRead(got_data, 5)) + String(bitRead(got_data, 4)) + " " + String(bitRead(got_data, 3)) + String(bitRead(got_data, 2)) + " " + String(bitRead(got_data, 1)) + String(bitRead(got_data, 0));
printf("Got payload... ");
Serial.println(bin_string);
// Delay just a little bit to let the other unit
// make the transition to receiver
  delay(20);
}
For now, we are simulating exploration by methodically incrementing the data to travel the entire grid. The data is then sent to the FPGA.
Debugging:
We ran into some issues using the digital pins on the Arduino. We mistakenly attempted to use digital pins 0 and 1 as GPIO–however, they are TX and RX, meant for serial communication. This made the data transmission between the Arduino and FPGA questionable in our parallel implementation. Without pins 0 and 1, we were forced to use a 7-bit packet (on pins 2-8).
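For reference, with pins 0 and 1 unavailable, driving the 7-bit packet onto pins 2-8 can be as simple as the sketch below (the bit-to-pin ordering here is an illustrative choice, not necessarily the one in our final code):

```cpp
// Drive the 7-bit packet onto digital pins 2-8 for the FPGA to read.
// Here pin 2 carries bit 0 and pin 8 carries bit 6.
void sendPacketParallel(unsigned char packet) {
  for (int bit = 0; bit < 7; bit++) {
    digitalWrite(2 + bit, (packet >> bit) & 1);
  }
}

void setup() {
  for (int pin = 2; pin <= 8; pin++) {
    pinMode(pin, OUTPUT);
  }
}

void loop() {
  sendPacketParallel(0b0100101); // example packet: x = 2, y = 1, state = 1
}
```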
As in Lab 3, we split the screen into rows and then columns using nested case statements. These nested case statements create the necessary 4x5 grid and allow us to set the pixel color within each square. Previously the color had been hard-coded, but now it is determined by data sent from the Arduino. In order to store the incoming data, we created a 4x5 array of 2-bit values. This array is updated every time the FPGA receives information from the Arduino.
For Lab 4 we decided to use parallel communication between the Arduino and the FPGA, since we chose to send only a 7-bit number in the radio communication part. The Arduino receives the 7-bit number and outputs each of the 7 bits on a different digital pin. We then put the digital outputs through voltage dividers because the FPGA GPIO pins can only handle 3.3 V. The circuit was connected as shown in the picture below.
We created a radio-read module in Verilog which simply maps the first 3 bits to the column number, the next 2 bits to the row number, and the last 2 bits to the state information. We then used the code below to update our state machine depending on the signal being received.
for(i = 0; i < 5; i = i+1) begin
for (j = 0; j < 4; j = j+1) begin
if ((i+5*j) <= grid_counter) maze_state[i][j] <= radio_value;
else begin
if (radio_value == 0) begin
maze_state[i][j] <= 3;
end
else begin
maze_state[i][j] <= radio_value - 1;
end
end
end
end
Then we displayed the map as we did in Lab 3 using the case statements and the VGA driver.
For Milestone One, our team was challenged to create a robot with the ability to follow a black line using sensors, as well as complete a figure-8 motion when placed on a grid. This required an in-depth analysis of mechanics, hardware, and software.
For the first part of the milestone, we had to figure out how to make Brooklynn follow a line. We determined that the best way to approach this was to use sensors to read and track the black line as she followed it. This meant we were faced with two challenges: the placement and usage of sensors to detect the line, and the act of remaining on the line and following it as she moved.
The first challenge in designing Brooklynn from scratch was to make her mobile. We decided to start off using the servos as motors and, if this turns out to be a problem, we will consider swapping them out for more precise and controllable motors. Building Brooklynn was fairly straightforward: we attached the servos to a set of wheels, which were then secured to a plastic base with a third leg for support. On top of the base, we attached an Arduino Uno and a circuit board for all of the wiring and programming. Finally, we attached two light sensors on the front of Brooklynn to help guide her.
To make Brooklynn follow a line, we relied on the values reported on the light sensors in (almost) real time. When the value read less than ~900, this indicated the sensor was over white area. When the value read greater than ~950, this indicated the sensor was over black area. At first we left a little space between the two sensors to give Brooklynn a wide range of “vision”. However, after running her in a trial round, we realized that she was correcting her motion too much and wiggling around the line instead of following it directly. To fix this, we moved the sensors closer together so that once she corrected her motion, she would stay on the line and not continue to wiggle.
As far as power sources were concerned, we used a typical phone charger to power the servos and a regular 9V battery to power the Arduino. From the Arduino, we used the 5V output as a power source for the light sensors. The remaining wiring involved connecting the hardware to ground, as well as connecting the servos to Arduino outputs and the light sensors to Arduino inputs.
To make Brooklynn follow a line, we coded a way for her to utilize the two center sensors in front. First, we obtained data from the sensors to determine the values of white and black. From there, we created an algorithm represented by this pseudocode:
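In outline (a condensed restatement of the line-following logic from the full sketch at the bottom of this page):

```cpp
#include <Servo.h>

Servo leftservo;
Servo rightservo;

void setup() {
  leftservo.attach(9);
  rightservo.attach(10);
}

void loop() {
  int left  = analogRead(A0);  // inner-left sensor  (>950 over black, <900 over white)
  int right = analogRead(A1);  // inner-right sensor
  if (abs(left - right) < 75) {
    // Both sensors read the same surface: drive straight ahead.
    leftservo.write(180);
    rightservo.write(0);
  } else if (left > right) {
    // Drifted so the right sensor sees white: stop the left wheel to swing back onto the line.
    leftservo.write(90);
    rightservo.write(0);
  } else {
    // Drifted so the left sensor sees white: stop the right wheel to swing back onto the line.
    leftservo.write(180);
    rightservo.write(90);
  }
}
```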
With this, if one sensor went off of the line, Brooklynn would shift and correct herself to have both of her middle sensors over the line again.
The full code can be found at the bottom of this page.
The video above shows Brooklynn in action. She correctly follows a line, and corrects herself as she moves. However, we noticed that she had issues with remaining in a straight line, and felt that the turns were too jolting and unnecessarily slowed her down. It turned out that we initially failed to have a common ground for our servos, which resulted in them moving at different speeds. We also decided to have Brooklynn use only one wheel at a time when adjusting so that she was always moving forward on a line, instead of pivoting to adjust. The final iteration can be seen below.
With the first part done, we were ready to move on to the next step.
For the figure 8, we faced several more challenges. Brooklynn needed to follow a line, but she also needed to turn at and cross specific junctions. To do this, we needed her to determine where these junctions were, and how to act at these junctions.
To modify Brooklynn to move in a figure-8 motion, we decided to add two more sensors to her. The initial two that we had placed on her functioned primarily to keep her following the line. The additional two sensors were used solely to detect the intersections. To do this, we made sure to place them towards the sides of Brooklynn so that they would not detect the line she was following, but would detect the intersections. This was crucial in developing the code to perform this task.
The final hardware wiring for Milestone 1 is depicted by the following schematic:
The first task in coding our figure 8 program was implementing our line following algorithm. After that, we needed to tell Brooklynn when she reached a junction, and what to do. Through the outer sensors, we were able to tell her when a “new action” was to be taken. She would then follow a loop of commands to determine whether that action was a turn or driving through an intersection.
For our turns, we first tried using an algorithm similar to this:
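(Reconstructed sketch: roughly the turn logic from the full figure-8 code at the bottom of this page, but without the settling delay that we added later; right(), readSensor(), and the sensor variables are the helpers defined there.)

```cpp
// First attempt at an intersection turn: spin until the outer sensors leave
// the intersection, then keep spinning until the inner sensors find the next line.
void rightTurnFirstAttempt() {
  while (outLeft > 900 || outRight > 900) { // an outer sensor still sees the intersection (black)
    right();
  }
  while (inLeft < 900 && inRight < 900) {   // inner sensors have not found the next line yet
    right();
  }
}
```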
However, this proved to be unreliable. Brooklynn would often begin turning and not complete the turn, or not turn at all. To fix this issue, we changed our code to implement a delay before the sensors would read new values. As a result, Brooklynn was able to respond much more reliably at intersections.
The entire code can be found at the bottom of the page.
The video above shows our working, but unoptimized figure 8 build. Eventually, we changed our turns to use both wheels (the outer wheel moves faster than the inner wheel to keep the turn sharp but not on a pivot) as well as moving our outer junction-detecting sensors back to compensate for quicker, sharper turns. Below is a video of our final implementation.
#include <Servo.h>
Servo leftservo;
Servo rightservo;
void setup() {
// initialize serial communication at 9600 bits per second:
Serial.begin(9600);
pinMode(9,OUTPUT);
pinMode(10,OUTPUT);
leftservo.attach(9);
rightservo.attach(10);
}
// the loop routine runs over and over again forever:
void loop() {
// read the input on analog pin 0:
int sen1 = analogRead(A0);
int sen2 = analogRead(A1);
/*Serial.print(sen1); //
Serial.print(F(" "));
Serial.println(sen2);*/
if (abs(sen1-sen2)<75){
leftservo.write(180);
rightservo.write(0);
}
else if (sen1>sen2){ //tilted to the right; right sensor senses white
leftservo.write(90);
rightservo.write(0);
}
else if (sen2>sen1){ //tilted to the left; left sensor senses white
leftservo.write(180);
rightservo.write(90);
}
}
### Figure 8 Code
#include <Servo.h>
int rightTurn;
int leftTurn;
int inLeft;
int inRight;
int outLeft;
int outRight;
int stepCounter;
Servo leftservo;
Servo rightservo;
void setup() {
// initialize serial communication at 9600 bits per second:
Serial.begin(9600);
leftservo.attach(9);
rightservo.attach(10);
rightTurn = 0;
leftTurn = 0;
stepCounter = 1;
}
// the loop routine runs over and over again forever:
void loop() {
// read the input on analog pin 0:
//light sensors
//(>950 : black line ; <900 : white space)
readSensor();
// --------------
// LINE FOLLOWING start
// --------------
while (outLeft < 900 || outRight < 900){ //while at least one outer sensor still sees white (not yet at an intersection)
readSensor();
if (abs(inLeft-inRight)<70){ //if inner are similar
forward();
}
else if (inLeft>inRight){ //if tilted left, correct
leftservo.write(90);
rightservo.write(0);
}
else if (inRight>inLeft){ //if tilted right
leftservo.write(180);
rightservo.write(90);
}
readSensor();
}
stop();
//-------------
//end line follow section
//-------------
//--------------
//steps when intersection is encountered section, start
//--------------
switch(stepCounter){
case 1://for first 3 detected intersections, turn right
case 2:
case 3:
right();
delay(500);
while(outLeft > 900 || outRight > 900) { //keep turning while either outer sensor still sees the junction (black)
right();
}
while(inLeft < 900 && inRight < 900) { //keep turning until an inner sensor finds the new line (black)
right();
}
stepCounter++;
break;
case 4: //go straight after 3 right turns
forward();
delay(500);
stepCounter++;
break;
case 5://turn left at next 3 intersections
case 6:
case 7:
left();
delay(500);
while(outLeft > 900 || outRight > 900) { //keep turning while either outer sensor still sees the junction (black)
left();
}
while(inLeft < 900 && inRight < 900) { //keep turning until an inner sensor finds the new line (black)
left();
}
stepCounter++;
break;
case 8://go straight after 3 left turns
forward();
delay(500);
stepCounter=1;
break;
default:
break;
}
}
void readSensor(){
inLeft = analogRead(A0);
inRight = analogRead(A1);
outLeft = analogRead(A2);
outRight = analogRead(A3);
}
//turning functions
void forward() {
leftservo.write(180);
rightservo.write(0);
readSensor();
}
void right() {
leftservo.write(180);
rightservo.write(95);
readSensor();
}
void left() {
leftservo.write(85);
rightservo.write(0);
readSensor();
}
void stop() { //definition assumed (missing from the original listing): 90 is the neutral/stop position for continuous-rotation servos
leftservo.write(90);
rightservo.write(90);
}
The primary goal of Milestone 2 is to be able to distinguish between three treasure frequencies–7kHz, 12kHz, and 17kHz–using our FFT with the IR phototransistor. Additionally, we implemented a short-range wall sensor in order to autonomously detect walls.
Most of the work for this milestone had been completed in Lab 2, in which we mapped the Fourier transforms of the detected IR signal. To complete this milestone, we added LEDs to our preexisting IR detection circuit (with amplification) so that the correct one lights up when the corresponding treasure signal is detected.
The circuit with the LEDs can be viewed below:
We referenced our amplified treasure detection data from the previous lab:
This data shows that bins 48, 81, and 114 correspond to the 7kHz, 12kHz, and 17kHz treasures, respectively. On average, the magnitude in these bins is between 140 and 160, so to be safe we set our detection threshold at 130. We added a simple conditional to check the value of the desired bins as follows:
...
if (fft_log_out[48] > 130){ //7kHz, white
digitalWrite(9, HIGH);
}
else if (fft_log_out[81] > 130){ //12kHz, green
digitalWrite(10, HIGH);
}
else if (fft_log_out[114] > 130) { //17kHz, red
digitalWrite(11, HIGH);
}
else { //turn all off
digitalWrite(9, LOW);
digitalWrite(10, LOW);
digitalWrite(11, LOW);
}
We set up digital pin 9 to be a white LED, 10 to green, and 11 to red. These correspond to 7, 12, and 17kHz. When the treasures are presented in front of the IR phototransistor, the idea was that the corresponding LED would light up. Our test was successful, and can be seen in the following video:
The full code for this section is pasted below:
/*
fft_adc_serial.pde
guest openmusiclabs.com 7.7.14
example sketch for testing the fft library.
it takes in data on ADC0 (Analog0) and processes them
with the fft. the data is sent out over the serial
port at 115.2kb.
*/
#define LOG_OUT 1 // use the log output function
#define FFT_N 256 // set to 256 point fft
#include <FFT.h> // include the library
void setup() {
pinMode(9, OUTPUT);
pinMode(10, OUTPUT);
pinMode(11, OUTPUT);
pinMode(12, OUTPUT);
Serial.begin(9600); // use the serial port
TIMSK0 = 0; // turn off timer0 for lower jitter
ADCSRA = 0xe5; // set the adc to free running mode
ADMUX = 0x40; // use adc0
DIDR0 = 0x01; // turn off the digital input for adc0
}
void loop() {
while(1) { // reduces jitter
cli(); // UDRE interrupt slows this way down on arduino1.0
for (int i = 0 ; i < 512 ; i += 2) { // save 256 samples
while(!(ADCSRA & 0x10)); // wait for adc to be ready
ADCSRA = 0xf5; // restart adc
byte m = ADCL; // fetch adc data
byte j = ADCH;
int k = (j << 8) | m; // form into an int
k -= 0x0200; // form into a signed int
k <<= 6; // form into a 16b signed int
fft_input[i] = k; // put real data into even bins
fft_input[i+1] = 0; // set odd bins to 0
}
fft_window(); // window the data for better frequency response
fft_reorder(); // reorder the data before doing the fft
fft_run(); // process the data in the fft
fft_mag_log(); // take the output of the fft
sei();
Serial.println("start");
for (byte i = 0 ; i < FFT_N/2 ; i++) {
Serial.println(fft_log_out[i]); // send out the data
}
if (fft_log_out[48] > 130){ //7kHz, white
digitalWrite(9, HIGH);
}
else if (fft_log_out[81] > 130){ //12kHz, green
digitalWrite(10, HIGH);
}
else if (fft_log_out[114] > 130) { //17kHz, red
digitalWrite(11, HIGH);
}
else { //turn all off
digitalWrite(9, LOW);
digitalWrite(10, LOW);
digitalWrite(11, LOW);
}
}
}
In the future, we will have two IR detectors (one on each side) and mount them on Brooklynn. We will likely maintain the simple LED setup to flag when a treasure is detected.
The other goal of this milestone was to get Brooklynn to detect a wall autonomously. To do this, we attached a short range IR sensor to the front of Brooklynn and connected it to our Arduino via an analog pin so we could calibrate its readings. From there, we combined snippets of our line detection and figure 8 code from Milestone 1 with new wall detection code to make the robot turn left whenever a wall was detected directly in front of her at an intersection (walls directly in front of Brooklynn only occur at intersections). Our resulting code looked like the pseudocode below:
**line following code**
if (at an intersection and a wall is detected in front)
turn left
The full code can be found at the bottom of this page.
In addition, a video of Brooklynn autonomously detecting walls and turning is below:
In the future, we will use two more short range IR sensors, one on each side of Brooklynn, to detect walls beside her. This will help in mapping out the maze, and will allow Brooklynn to determine which direction to turn at an intersection. These sensors have been mounted and detect walls correctly, like the front sensor, but are not yet in use.
As of this report, Brooklynn is being dismantled so that her wiring can be reorganized and made more accessible for future work. Here is the current status:
The idea is to restructure her (for now) so that the Arduino sits on the lower base plate while the entire upper base plate is used as an extended circuit board. We decided to swap the locations of the breadboard and Arduino because, between adding new hardware components and building the new Schmitt triggers, we need much easier access to large parts of the breadboard than the previous setup allowed.
A key part of this reorganization is resizing all of the wires so that they fit tightly into the breadboard and are easily traceable back to the hardware. In line with this, we hope to keep things as color coded as possible. Eventually, we plan to move from the breadboard to a perfboard so that wire connections are cleaner and more secure. We are waiting to do this because we have had to move hardware components around frequently as we add to Brooklynn's capabilities, and we still need the flexibility of the breadboard.
A final consideration in our build changes is keeping the center of mass as low and centered as possible, to avoid tipping over while navigating the maze. In the previous setup she was getting a bit too tall, which would be fine if we added weight, but that would compromise her speed. For now we removed the third level shown in the image above, but kept the height gap between the two base plates for easy access to both the Arduino and the breadboard. Once we finalize the hardware and wiring, we will likely lower the top plate and may even put the Arduino back on top, since all of the wiring will be soldered onto a perfboard by then.
We realized that we will soon have too many analog sensors for the six analog pins on the Arduino Uno. To remedy this, we have begun implementing Schmitt triggers: comparator circuits with hysteresis that convert an analog input into a clean digital output. Essentially, we choose the threshold voltages at which to turn "on" and "off," and calculate the corresponding resistor values from there. We used an Inverting Schmitt Trigger Calculator and the following base circuit:
We will convert our outer two line sensors and two side wall sensors to digital signals using Schmitt Triggers in order to have enough analog pins for the remaining line sensors, front wall sensor, microphone, and IR treasure sensors.
Our first attempt was to use a non-inverting Schmitt trigger for all four signals. We read the digital values in the Arduino IDE and printed them to the serial monitor. Using the LM358 dual op-amp again, the implementation is shown below:
This did not work. On the oscilloscope and the serial monitor, we could tell the signal started low and would switch to high when desired, but it would never come back down. After discussing with a TA, we realized the feedback loop was somehow interfering with the output signal, so we switched to an inverting Schmitt trigger at this point.
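For reference, the threshold arithmetic behind this kind of circuit is straightforward. The sketch below assumes a simplified inverting Schmitt trigger: the sensor drives the inverting input, and the non-inverting input sits on a divider formed by R1 (to a fixed reference Vref) and R2 (to the op-amp output), with an ideal rail-to-rail output. All of the component values here are made up for illustration; our actual values came from the calculator mentioned above.

const float Vcc  = 5.0;      // supply, assuming an ideal rail-to-rail output
const float Vref = 2.5;      // reference at the divider (e.g. Vcc/2)
const float R1   = 10000.0;  // reference-to-node resistor, ohms (assumed)
const float R2   = 47000.0;  // output-to-node feedback resistor, ohms (assumed)

void setup() {
  Serial.begin(9600);
  // The non-inverting input voltage for each output state sets the two thresholds:
  float vUpper = (Vref * R2 + Vcc * R1) / (R1 + R2); // output high: input must rise above this to switch low
  float vLower = (Vref * R2) / (R1 + R2);            // output low: input must fall below this to switch high
  Serial.println(vUpper);  // ~2.94 V with these values
  Serial.println(vLower);  // ~2.06 V, giving ~0.88 V of hysteresis
}

void loop() {}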
#include <Servo.h>
Servo leftservo;
Servo rightservo;
int inLeft;
int inRight;
int outLeft;
int outRight;
void setup() {
// initialize serial communication at 9600 bits per second:
Serial.begin(9600);
pinMode(9,OUTPUT);
pinMode(10,OUTPUT);
leftservo.attach(9);
rightservo.attach(10);
}
// the loop routine runs over and over again forever:
void loop() {
// read the input on analog pin 0:
readSensor();
while (outLeft < 900 || outRight < 900){ //while at least one outer sensor still sees white (not yet at an intersection)
if (abs(inLeft-inRight)<70){ //if inner are similar
leftservo.write(180);
rightservo.write(0);
}
else if (inLeft>inRight){ //if tilted left, correct
leftservo.write(90);
rightservo.write(0);
}
else if (inRight>inLeft){ //if tilted right
leftservo.write(180);
rightservo.write(90);
}
readSensor();
}
if (analogRead(A5) > 200) { //If at an intersection and a wall is detected in front
left();
delay(500);
while(outLeft > 900 || outRight > 900) { //keep turning while either outer sensor still sees the junction (black)
left();
}
while(inLeft < 900 && inRight < 900) { //keep turning until an inner sensor finds the new line (black)
left();
}
}
else{ // if there is no wall in front at the intersection, drive across
forward();
delay(500);
}
}
void forward() {
leftservo.write(180);
rightservo.write(0);
readSensor();
}
void left() {
leftservo.write(85);
rightservo.write(0);
readSensor();
}
void readSensor(){
inLeft = analogRead(A0);
inRight = analogRead(A1);
outLeft = analogRead(A2);
outRight = analogRead(A3);
}
Our objective for Milestone 3 was to implement a search algorithm in simulation as well as in real life. In both, the robot must display a “done” signal at the end of the search (i.e. once all explorable squares have been visited).
We chose to write our simulation in Python due to the simplicity of interfacing with graphics. We used the Pygame library, which is often used for simple game development, for the display. To create the virtual environment simulating the real-life maze, we wrote two classes. The class Square creates each "square" (i.e. intersection in the maze) with attributes describing its (x,y) coordinate, its index (from 0-19), and whether any walls surround it.
class Square():
    """
    Instance is each grid square.
    """
    def __init__(self, x, y, index, right, down, left, up):
        self.x = x
        self.y = y
        self.index = index
        self.right = right
        self.down = down
        self.left = left
        self.up = up
The Maze class initializes the maze itself, which is a 2D array of Square objects. There are functions within this class that assist with its usability: makeSquares() creates the 2D array, assigning the coordinates and indices, and setupWalls() uses two previously assigned "horizontal wall" and "vertical wall" 2D arrays to assign each Square's directional wall attributes.
def makeSquares(self):
    x = 0
    y = 0
    index = 0
    right = 0
    down = 0
    left = 0
    up = 0
    for row in range(ROWS):
        for col in range(COLS):
            self.squares[row][col] = Square(row, col, index, right, down, left, up)
            index = index + 1

def setupWalls(self, hwall, vwall):
    for row in range(ROWS):
        for col in range(COLS):
            if hwall[row][col] == 1:
                self.squares[row][col].up = 1
            if hwall[row + 1][col] == 1:
                self.squares[row][col].down = 1
            if vwall[row][col] == 1:
                self.squares[row][col].left = 1
            if vwall[row][col + 1] == 1:
                self.squares[row][col].right = 1
After initializing the squares, Maze, walls, and the Pygame library, we define two more functions--depth first search, and drawing the maze. The drawing function simply interfaces with the Pygame library, setting the color of the squares, and drawing in walls if applicable.
Depth first search seemed like the natural go-to search algorithm, as a robot can efficiently continue down a path, but it cannot teleport to diagonal squares like breadth-first would require. Since Python allows modification of the size of lists, we initialize three lists to use as stacks--visited (i.e. which squares the robot has gone to), a frontier (in order, the next squares to visit), and a path (in order, the squares visited) to facilitate backtracking when it is needed. First, we implemented the search itself with no real-life analog, as "paths" were not taken into account, only visited versus unvisited squares (additionally, at this point, we were not sure how to display the walls).
To more realistically simulate the robot, we needed the "current" location not to teleport to nonadjacent squares. To do this, we used a variable "goback" to determine whether a dead end was hit, either because of walls or because all adjacent squares had been visited. Our next attempt was to pop squares back off the visited stack until an unvisited square was reached, but this posed an issue: because of the adjacency of squares, the same square could be added to the frontier multiple times, and the robot would then go back to squares unnecessarily. Here is an example:
We start by pushing the start square onto the frontier. If going back is not necessary, we set a current variable to whatever is popped off the frontier and draw it in blue (against the green of visited squares) to signify that it is the current square. We then push it onto the path and, if it has not already been visited, add it to visited.
To determine how to add squares to the frontier, we set a definitive priority–most prioritized was east, then south, west, and north (as we set our grid to be landscape mode, with our start square in the top left). The simulation checks if there is a wall in a direction, and if not, set next to that square. This more realistically simulates the robot, as it will not know whether it is surrounded by walls or not until it physically gets to the square–so our simulation “uncovers” walls as it goes. If the square should be visited, it is added to the frontier.
if current.up == 0:  # if no wall to top
    next = squares[current.x - 1][current.y]  # up one
    hwall[current.x][current.y] = 0
    if next not in visited:
        frontier.append(next)
This is repeated for each direction. If nothing was appended to the frontier, backtracking is required. In this situation, for each node in the frontier, if it’s been visited, it can be removed from the frontier. If there is anything left in the frontier, the top of the path (i.e. the most recently visited square) is appended, such that popping off the modified frontier will effectively backtrack on the previously taken path. This continues until the robot reaches its goal, or the next unexplored square. Once at this goal, goback returns to 0.
for node in frontier:
    if (node in visited):
        frontier.remove(node)
tmp2 = path.pop()
if (len(frontier) != 0):
    frontier.append(tmp2)
goback = 0
At the end, the screen is cleared and “DONE” in text is displayed. Here are a few videos of the simulation:
In the future, we would like to implement Dijkstra’s algorithm, particularly in relation to finding a path across the maze to an unvisited square when backtracking is required.
The concept for real-life implementation was to implement the bulk of it based on the Python simulation. We started from our robot base code that could sense walls and follow lines. In emulating our Square class, we created a Square struct with x and y coordinates.
typedef struct{
int x; int y;
} Square;
We utilized the StackArray library to implement a dynamic stack for our frontier and path. To keep track of the visited squares, we created an array of size 20 (the maximum number of visitable squares in our 4x5 grid). We initialize our start square to (0,0) and push it onto the frontier. A minimal sketch of this bookkeeping follows.
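The sketch below uses the Square struct above and assumes the StackArray library's push()/pop()/isEmpty() interface; the helper names match the ones used later in this milestone, but the bodies here are our own illustration.

#include <StackArray.h>

StackArray<Square> frontier;   // squares queued up to visit next
StackArray<Square> path;       // squares visited in order, used for backtracking
Square visited[20];            // at most 20 visitable squares in the 4x5 grid
int visitedSize = 0;

// squareCompare() and isMember() are described later in this milestone
bool squareCompare(Square a, Square b) {
  return (a.x == b.x) && (a.y == b.y);
}

bool isMember(Square s, Square arr[], int size) {
  for (int i = 0; i < size; i++) {
    if (squareCompare(s, arr[i])) return true;
  }
  return false;
}

void setup() {
  Square start;
  start.x = 0;
  start.y = 0;                 // start square (0,0), pushed onto the frontier
  frontier.push(start);
}

void loop() {}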
The main change between our simulation and the real-life implementation is keeping track of the robot's orientation in order to turn properly toward the next square. To do this, we kept track of the direction the robot was facing with a variable, then used it together with the robot's current coordinates and the coordinates of the square to visit next. From the coordinates, we determined whether the robot was heading north, south, east, or west in reference to our maze map, which gave us both the cardinal direction and the new orientation of the robot. 0 corresponded to north, 1 to east, 2 to south, and 3 to west.
char reorient(char current[], char next[], char curr_o) {
  char next_o = 0;
  char diff[] = {0, 0};
  diff[0] = next[0] - current[0];
  diff[1] = next[1] - current[1];
  if (diff[0] == -1) {        // north
    next_o = 0;
  }
  else if (diff[0] == 1) {    // south
    next_o = 2;
  }
  else if (diff[1] == -1) {   // west
    next_o = 3;
  }
  else if (diff[1] == 1) {    // east
    next_o = 1;
  }
The next part of the code calculates how the robot should turn based on how its orientation needs to change. With the cardinal numbering described above, we can determine whether the robot should go straight, turn left or right, or turn around by subtracting the current orientation from the new one. The table below lists the calculated values and the corresponding actions, and the code after it shows this in action.
Calculated Value | Action |
---|---|
0 | Straight |
-1, 3 | Left |
1, -3 | Right |
-2, 2 | Turn Around (Flip) |
  if (next_o - curr_o == 0) {
    // straight
    forward();
    delay(200);
  }
  else if (next_o - curr_o == 1 || next_o - curr_o == -3) {
    // turn right
    right();
    delay(800);
  }
  else if (abs(next_o - curr_o) == 2) {
    // flip
    flip();
    delay(1250);
  }
  else if (next_o - curr_o == -1 || next_o - curr_o == 3) {
    // turn left
    left();
    delay(800);
  }
  curr_o = next_o;
  Serial.println(curr_o);
  stp();
  return curr_o;
}
Finally, the robot updates its new orientation and returns to line following code. To test this, we manually wrote what each next square should be. A video of the code being tested can be found below.
Our reorientation code requires inputs of a current square, the next square, and the current orientation. These are given from our depth first search implementation. The DFS function follows the logic of our simulation, referring to each square by its (x,y) coordinates. While the frontier is not empty, if backtracking is not required, we find the current node, check if it is visited, and visit it if it hasn’t been. Again, it is also added to the path stack.
To determine the coordinates of the next square, the wall sensors are checked. If there is no wall, the corresponding direction determines the next square to visit. If it has not yet been visited, it is added to the frontier. This happens in reverse order of our priority, and is repeated for each direction.
if (wallL == 1) { // no wall on left
  if (orient == 3) {
    next.x = current.x + 1;
    next.y = current.y;
  }
  else if (orient == 2) {
    next.x = current.x;
    next.y = current.y + 1;
  }
  else if (orient == 1) {
    next.x = current.x - 1;
    next.y = current.y;
  }
  else if (orient == 0) {
    next.x = current.x;
    next.y = current.y - 1;
  }
  if (!isMember(next, visited, visitedSize)) {
    frontier.push(next);
  }
}
...
isMember() is a function that simply checks whether a square has been visited. It uses squareCompare(), which checks whether the (x,y) coordinates of two squares are equal. At this point, we wanted to check that our basic DFS, without walls, was working. Here is the test:
After determining this was successful, we returned to the code. At the end of checking the wall sensors (of which we have three--two on the sides and one in front), if nothing was added to the frontier, the robot needs to backtrack.
Currently, this is as far as we have been able to implement the Arduino code. Outputting the next square to the Serial Monitor, the robot knows where it needs to go, but currently fails to begin the backtracking process. In the future, we will add the implementation of going back on its path, and later, Dijkstra's algorithm instead of pure backtracking in order to find the shortest path to the next unvisited square. For now, we are still able to search the maze up until it needs to go back.
The objective of Milestone 4 is to prepare Brooklynn for the final competition: displaying walls and treasures as the robot explores, and displaying a done signal as well as playing a done tone when the maze has been fully mapped. Additionally, we spent a significant portion of time completing unfinished tasks and improving previous work.
For this milestone, one of the first tasks we worked on was completing the depth first search code to properly handle backtracking. While finishing this part of the code, we realized that Brooklynn was perfectly capable of exploring the entire maze using DFS as long as at least one square of the maze was closed off. However, as soon as we opened up the maze so that every square in the 4x5 grid could be explored, she would malfunction and reset before reaching the very last square. At first we thought it was a memory problem, but after adding code to check the memory used by the data we were storing and outputting, we realized we were nowhere near capacity. We eventually found that the problem lay in the size of the array we used to track the maze. We had made the array size 20 because there are 20 squares in the grid, but the array actually needs to hold 21 entries: Brooklynn is initialized to start on an "imaginary" square at location (0, -1), so that when she encounters the very first intersection at (0,0) she can go through the proper procedure of adding the available options to the frontier. After changing the size of this array, the DFS code was fully functional. A small illustration of the fix is shown below.
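The variable names here are assumptions; the sizes and the (0, -1) starting convention come from the description above.

Square visited[21];        // 20 real intersections + 1 imaginary starting square
int visitedSize = 0;

void initSearch() {
  Square imaginaryStart;
  imaginaryStart.x = 0;
  imaginaryStart.y = -1;   // just off the board, adjacent to the first real intersection at (0,0)
  visited[visitedSize++] = imaginaryStart;
}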
For the final competition, we want our backtracking to use Dijkstra's algorithm and return along the shortest path instead of the path originally taken to get there. The shortest-path backtracking is fairly straightforward: once the robot reaches a dead end, it checks the node it is currently on and the next node in the frontier, then does a breadth first search to find the shortest path from its current position to that node. Once the path is found, Brooklynn simply follows it until she reaches the next node and then continues with DFS. We plan to hold the maze information on the robot with a struct called Squarewalls, which has fields n, w, s, and e representing the north, west, south, and east sides of an intersection; a 2D array of Squarewalls holds the full maze. As of now, we have begun implementing the shortest path algorithm, but it is not yet complete. A rough sketch of the planned maze memory is below.
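The field names n, w, s, and e come from the description above; the array dimensions and the initialization helper are our assumptions.

typedef struct {
  bool n;   // wall on the north side of the intersection
  bool w;   // wall on the west side
  bool s;   // wall on the south side
  bool e;   // wall on the east side
} Squarewalls;

Squarewalls maze[4][5];   // one entry per intersection in the 4x5 grid

void clearMaze() {
  for (int r = 0; r < 4; r++) {
    for (int c = 0; c < 5; c++) {
      // assume no walls until the robot actually sees them
      maze[r][c].n = false;
      maze[r][c].w = false;
      maze[r][c].s = false;
      maze[r][c].e = false;
    }
  }
}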
Instead of shorting the treasure sensors together, we decided to toggle quickly between analog pins 3 and 4 on each iteration of the FFT. We did so through the following code:
if (ADMUX == 0x43) {
  ADMUX = 0x44; //a4
  DIDR0 = 0x05;
}
else if (ADMUX == 0x44) {
  ADMUX = 0x43; //a3
  DIDR0 = 0x04;
}
This data is collected on the robot and will be sent via radio to the base station.
At every intersection, Brooklynn is programmed to check for the presence of treasures. Within DFS, she also checks for walls in order to add squares to the frontier; this is when we also update the walls variable, and her update depends on her orientation, as in Milestone 3. Shifting the data as usual, the packet is sent over the radio. The receiving Arduino takes the data and sends it to the FPGA through SPI, and the treasures should update on the display.
To install the IR sensors onto the hardware, we first had to solder and heat-shrink wires onto the sensors. We then designed and 3D printed the mount shown below to hold the IR sensor. We especially wanted the hollow part of the piece to be long so that the soldered and heat-shrunk parts of the wires would be protected.
We reformatted our packets to carry the data we actually need to drive the display correctly. We are using a 16-bit packet, laid out like so:
valid | done | 17kHz | 12kHz | 7kHz | north | east | south | west | orientation | orientation | x | x | x | y | y
We use a valid bit to determine which data to "throw away," such as any startup inconsistencies or the 16'b0 signal that may be sent at the beginning. The done bit is self-explanatory: it will be 1 when the robot has finished exploration and 0 otherwise. The following three bits describe whether the robot has detected a treasure, with the set bit indicating the frequency. The next four bits describe whether the robot has detected walls and in which directions. Orientation follows the same convention we used in Milestone 3, as do the x and y coordinates.
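As an illustration of how such a packet might be assembled on the sending side: the field order follows the layout above, but the exact bit positions and the helper name are our assumptions (on the Uno, an unsigned int is 16 bits).

unsigned int makePacket(bool valid, bool done,
                        byte treasures,   // 3 bits: 17kHz | 12kHz | 7kHz
                        byte walls,       // 4 bits: north | east | south | west
                        byte orient,      // 2 bits: 0=N, 1=E, 2=S, 3=W
                        byte x,           // 3 bits
                        byte y) {         // 2 bits
  unsigned int packet = 0;
  packet |= (unsigned int)(valid ? 1 : 0) << 15;
  packet |= (unsigned int)(done  ? 1 : 0) << 14;
  packet |= (unsigned int)(treasures & 0x7) << 11;
  packet |= (unsigned int)(walls     & 0xF) << 7;
  packet |= (unsigned int)(orient    & 0x3) << 5;
  packet |= (unsigned int)(x         & 0x7) << 2;
  packet |= (unsigned int)(y         & 0x3);
  return packet;
}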
In Lab 4, we utilized parallel communication, under the assumption it would be a simpler implementation. We realized that we would not have enough digital pins to do so, and made the switch to SPI (which uses fewer pins on the Arduino) for this milestone.
Our SPI uses three main lines, Master Out Slave In (MOSI), Clock (SCK), and Slave Select (SS), to connect the receiving Arduino to the FPGA. When the SS pin is pulled low, the FPGA listens to the master (i.e. the Arduino). A new driver was written to read in the data from the radio module, check which slave to send to, send the data, and check for a successful transfer. We then stop SPI and set the SS pin high again to end communication.
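A minimal sketch of that hand-off using the standard Arduino SPI library; the slave-select pin number, clock rate, and SPI mode here are assumptions.

#include <SPI.h>

const int SS_PIN = 10;               // slave select line to the FPGA (assumed pin)

void setup() {
  pinMode(SS_PIN, OUTPUT);
  digitalWrite(SS_PIN, HIGH);        // deselected until we have data to send
  SPI.begin();
}

void sendToFPGA(unsigned int packet) {
  SPI.beginTransaction(SPISettings(1000000, MSBFIRST, SPI_MODE0));
  digitalWrite(SS_PIN, LOW);         // pull SS low: the FPGA starts listening
  SPI.transfer16(packet);            // clock the 16-bit packet out over MOSI
  digitalWrite(SS_PIN, HIGH);        // release the slave to end communication
  SPI.endTransaction();
}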
Debugging:
Oddly, we were having an issue with using the MISO pin, despite there only being one master and one slave. When the line was connected to the Arduino, data transmission would stop. Unplugging the MISO pin allowed the transmission to continue as usual. Considering that we don’t need to send data from the slave to the master, we have just left it unplugged.
We wanted to create a tune instead of having a plain finish tone. We transcribed “Take Me Out to the Ball Game” and defined the C5-C6 major scale’s frequencies for convenient use. From there, since we are only using 24 beats of the song, we modified our sound generation code from Lab 3 to check which note values (i.e. “indexed” beat of the song) corresponded to which notes in the scale. Here is a snippet of our song definition code:
if (note == 0 || note == 1 || note == 12 || note == 13) begin //C5
  count <= CLKDIVIDER_C5 - 16'b1;
  note  <= note + 5'b1;
end
else if (note == 9 || note == 10 || note == 11) begin //D5
  count <= CLKDIVIDER_D5 - 16'b1;
  note  <= note + 5'b1;
end
...
And so on. The actual tune can be heard in this video (the first C5 is hard to hear):
Once the FPGA receives this packet of data, it parses the data back into the respective sections, e.g. valid bit, done, treasures, etc. A new driver was written to update the maze graphic. After initializing our maze with walls to zeros, we iteratively check if each square needs to be updated, and if not, to keep it the same. This is where we update the color of the square if a treasure is present:
if ( treasures == 3'd0 ) begin //no treasures
  if ( maze_state[maze_x][maze_y] == 3'd0 ) maze_state[maze_x][maze_y] <= 3'd1;
  else maze_state[maze_x][maze_y] <= maze_state[maze_x][maze_y];
end
else begin
  case (treasures)
    3'b001: maze_state[maze_x][maze_y] <= 3'd4; //7kHz
    3'b010: maze_state[maze_x][maze_y] <= 3'd5; //12kHz
    3'b100: maze_state[maze_x][maze_y] <= 3'd6; //17kHz
    default: maze_state[maze_x][maze_y] <= 3'd7;
  endcase
end
Without this iteration, an inferred latch occurs. The walls are assigned based on the location of the square, the orientation of the robot, and the wall data received through SPI. Once everything is assigned, the colors are updated to reflect the respective state of the square or wall.
To test if the graphics were updating correctly based on the data, we created a MATLAB script to create packets that would simulate our manually assigned mazes. Below is a partial maze test.
After working out the bugs, we added an arrow to show the current orientation. The following map corresponds to the successful run below it:
We currently have an issue with the top row not displaying walls correctly. However, the maze display otherwise responds correctly to the inputs. We plan to fix this error before the final competition.
After completing both the display and finish tune, we integrated both together with much success. The display shows the robot complete the maze, then turns green and plays "Take Me Out to The Ball Game." The video can be seen below.
Debugging:
We had a lot of issues with displaying the data. Our primary issue was seemingly arbitrary walls and squares showing up when they had been neither explored nor updated at all. The root of the issue was clock speeds: we were running our grid driver at a slower clock than the SPI and VGA output logic. The changing signals became stable once we set the clock speeds to be the same.
While we didn’t quite have time to combine all of the components of this milestone, we have done enough unit testing to be confident that we will be able to integrate the radio sending code for the final competition.
Additionally, we have been working on implementing Dijkstra’s algorithm to backtrack more efficiently. This is mostly functional, as can be seen in the videos below:
However, they do not fully finish, and would then fail to send a done signal. This is why we chose to revert to our functional DFS code with backtracking for this milestone, but we will finish up Dijkstra’s later.
Although the display is (mostly) functional, the solid colors used are generally unappealing. We plan to change the graphics to use images for the ground, background, and walls. We also want to display our robot with a more appealing icon, instead of just an arrow. In addition, the small squares between walls will be addressed to blend in with walls and/or the ground to make the grid appear more seamless.
Finally, we plan to clean up the wiring on Brooklynn to avoid catching on things as well as simple aesthetics, and are ordering faster servo motors to replace the slower ones we are currently using. We are also getting voltage regulators to use in conjunction with the new servos.
In our last week of the course, our goal was to finalize Brooklynn for the competition, from integrating more modules completed in previous labs and milestones (e.g. the microphone) to improving the existing system.
In designing a display for real-time maze mapping, it seemed like a good idea to include the robot's current orientation, especially given that the robot would already be storing this information. We were also using 16-bit packets and had plenty of space left in them to include it. Including this information also allowed for extra protection against mapping false walls (or falsely missing walls): we simply do not update wall data for the direction opposite the one the robot is facing in that square. We had issues with some maze walls not appearing correctly that vexed us for a while, but eventually we figured out that the cause was a stray misplaced begin/end pair and a set of incorrect maximum indices.
For the final version of the display we wanted to choose images that would both create a more unified theme and look good given the resolution and color limitations. Because our robot's name is Brooklynn, we went for a more urban theme.
As the most dominant image, the background picture was the most important. We ended up going with a stamped sheet metal texture for a number of reasons. It is a simple, recognizable urban/industrial flooring texture, which works well with both the theme and the lower resolution image requirement. It also worked well for our color needs in that it does not require too much variation in the blues and as a gray none of the colors were saturated, allowing the treasure colors to be added on top. Because of its repetitive nature, this texture also worked well for tiling.
The image for the walls was the most difficult to choose, but in the end we decided to go with an abstract texture which was more uniform and has high contrast to the background image. We went with thin overlapping lines because it seemed like a good balance between industrial and tech-y.
For the sake of clarity, we chose to have the robot represented as an arrow so that it would be easy to tell which way it was oriented. To keep the image overhead down, we used conditional indexing in order to display the arrow in the right orientation and to replace the arrow background with the background texture image. This also allowed us to try many different combinations of robot images and backgrounds without having to remake the color list files which stored the image data.
Our previous done signal had been to turn the whole screen green, but we decided this was not ideal because it made the final maze no longer visible. To remedy this, we changed the done signal so that only the sides of the screen turn green, leaving the maze intact, and changed the blank image for all unvisited squares to an X to show that those were unvisitable. Originally we used a no-entry sign, but it was pointed out to us that this did not look good with the green on the sides, so we switched to the X, which went much better with our color theme.
Our shortest path algorithm only considers visited locations in the path, as there is no information about unexplored areas. While this is oftentimes the same path as depth-first-search, this is not always the case.
To find the shortest path, an array is initialized and set to the maximum value. A queue of Squares is also initialized, to which the current square is added. While this queue is not empty, or while a variable named dontstop is true, we check if the current square has been visited and mark it visited if it has not. If the destination square is the current one, dontstop is set to false. The best-path array is updated to include the shortest path to each node from the current one. Finally, the shortest path is added to the actual intended path for Brooklynn to take. This whole process is looped through until the desired node has been mapped. A simplified sketch of the search follows, and a few test runs are shown in the videos below:
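On the unweighted maze grid, Dijkstra's algorithm reduces to a breadth first search, which is what the sketch below does. It reuses the Squarewalls bookkeeping sketched earlier; the grid dimensions, array names, and the handling of dontstop are simplified, so this is an illustration of the idea rather than the exact code running on Brooklynn.

#define ROWS 4
#define COLS 5

Squarewalls maze[ROWS][COLS];   // wall info recorded as squares are explored (see sketch above)
bool explored[ROWS][COLS];      // true once the robot has visited a square

// Fills prev[][] so that walking it back from 'goal' gives a shortest path
// from 'start' that only crosses already-explored squares (plus the goal).
void shortestPath(Square start, Square goal, Square prev[ROWS][COLS]) {
  int dist[ROWS][COLS];
  for (int r = 0; r < ROWS; r++) {
    for (int c = 0; c < COLS; c++) {
      dist[r][c] = 32767;                    // effectively "infinity"
    }
  }

  Square queue[ROWS * COLS];                 // simple array-backed FIFO
  int head = 0;
  int tail = 0;
  dist[start.x][start.y] = 0;
  queue[tail++] = start;

  int dx[] = {-1, 0, 1, 0};                  // neighbor offsets in N, E, S, W order
  int dy[] = { 0, 1, 0, -1};

  while (head < tail) {
    Square cur = queue[head++];
    bool open[] = { !maze[cur.x][cur.y].n, !maze[cur.x][cur.y].e,
                    !maze[cur.x][cur.y].s, !maze[cur.x][cur.y].w };
    for (int d = 0; d < 4; d++) {
      int nx = cur.x + dx[d];
      int ny = cur.y + dy[d];
      if (nx < 0 || nx >= ROWS || ny < 0 || ny >= COLS) continue;
      bool isGoal = (nx == goal.x && ny == goal.y);
      if (!open[d] || (!explored[nx][ny] && !isGoal)) continue;  // blocked, or unknown territory
      if (dist[nx][ny] > dist[cur.x][cur.y] + 1) {
        dist[nx][ny] = dist[cur.x][cur.y] + 1;
        prev[nx][ny] = cur;                  // remember how we reached this square
        Square nxt;
        nxt.x = nx;
        nxt.y = ny;
        queue[tail++] = nxt;
      }
    }
  }
  // Following prev[][] back from 'goal' to 'start' (then reversing) gives the
  // sequence of squares for Brooklynn to replay on her way to the next frontier square.
}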
The basis of this work was completed in Lab 4 and Milestone 4. We created a helper function called radio_send that created the packets from the current position, wall, treasure, orientation, and state data (e.g. valid and/or done), and then sent it.
Because we ran into issues with some packets failing to send, we sent the packets of data multiple times at each intersection of the maze after it searched for treasures. Although not all the packets were received, because the sets of packets contained the same data, the base station still generally received all the necessary data. With the time constraint, we were unable to discover the root of this issue.
In addition to the failing packets, we had some issues with the robot sending packets that were mostly 1s. This created problems for the FPGA because it would receive a false done signal, often when the robot was just powering up, being programmed, or being reset. To keep this from interfering with the FPGA, we set up safeguards in the Verilog code instructing it to ignore a done signal if the packet also said that all three treasures had been detected in that square. As a further safeguard, the receiving Arduino had a conditional to not send the packet over SPI if there was a done signal and the x coordinate (width side of the maze, 0 indexed) was greater than 4. While we did eventually find the source of the mostly-1s packets that occurred in the middle of mapping the maze (a negative number), these safeguards did help get rid of the startup junk packets of all 1s. A sketch of the receiving-side check is below.
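This sketch reuses the illustrative bit layout from the packing example earlier, so the exact shifts are assumptions; the check itself follows the rule described above.

bool shouldForwardToFPGA(unsigned int packet) {
  bool done = (packet >> 14) & 0x1;
  byte x    = (packet >> 2)  & 0x7;   // 3-bit x coordinate, 0-indexed across the maze width
  if (done && x > 4) {
    return false;                     // "done" from outside the maze: startup junk, drop it
  }
  return true;
}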
For the final competition we mounted our second treasure sensor so that Brooklynn could detect treasures on both sides. Our treasure sensing code was largely unmodified from our Lab 2 implementation using the OpenMusicLabs FFT library. We used 64 samples instead of the initial 256 to save memory, changing our intended 7kHz, 12kHz, and 17kHz bins to 12, 20, and 28 respectively, instead of the bins calculated in Lab 2. The key piece of integration was resetting ADCSRA, ADMUX, DIDR0, and TIMSK0 to their initial state after calling our find_treasures function, which modifies them. Because of the two treasure sensors, we switch between analog pins 3 and 4 (by changing ADMUX from 0x43 to 0x44) before running the FFT. This code is run four times at each intersection--twice for each side.
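The save-and-restore step looks roughly like the sketch below; find_treasures() is our helper from the text, while the wrapper itself (and showing only one FFT pass per side rather than two) is an illustration.

void find_treasures();   // our FFT-based treasure check, defined elsewhere in the sketch

void checkTreasuresBothSides() {
  // remember the ADC/timer configuration the rest of the sketch relies on
  byte oldADCSRA = ADCSRA;
  byte oldADMUX  = ADMUX;
  byte oldDIDR0  = DIDR0;
  byte oldTIMSK0 = TIMSK0;

  ADMUX = 0x43;         // first treasure sensor on A3
  find_treasures();     // free-running FFT pass; modifies the registers above
  ADMUX = 0x44;         // second treasure sensor on A4
  find_treasures();

  // restore everything so analogRead(), delay(), and the servos behave normally again
  ADCSRA = oldADCSRA;
  ADMUX  = oldADMUX;
  DIDR0  = oldDIDR0;
  TIMSK0 = oldTIMSK0;
}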
In addition to checking for treasures while traversing the maze, we needed to cover the edge cases of treasures at dead ends or on a wall not examined in the last visited square of the maze. To cover the dead end case, we made sure the robot stops halfway through its 180 degree turn, checks for treasures, and sends a packet to the base station via the radio. For the case of unchecked walls in the final square, we added turning and checking for treasures as a required step before sending the done signal after completing the maze.
The key component missing from Milestone 4 was starting on the 660 Hz tone. Since we had already created a bandpass filter in Lab 2, we simply used our understanding of the FFT to check the bin containing the 660 Hz tone, which, by our calculations, was bin 5. Using the same method as checking for treasures (i.e. returning true when our chosen threshold is surpassed), we created a helper function check_mic. The result of calling this function is stored in a variable start_signal that indicates when a 660 Hz tone has been detected; when true, the search functionality is allowed to begin.
As a safety net, we also added a pushbutton to serve as a microphone override. In case Brooklynn fails to start on the 660 Hz tone, pushing the button also sets start_signal to true.
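A minimal sketch of the start gate: check_mic and start_signal are from the description above, while the pushbutton pin and its active-high wiring are assumptions.

bool check_mic();              // our FFT-based 660 Hz check, defined elsewhere in the sketch

const int OVERRIDE_PIN = 2;    // microphone override pushbutton (assumed pin and wiring)
bool start_signal = false;

void waitForStart() {
  pinMode(OVERRIDE_PIN, INPUT);
  while (!start_signal) {
    // start on a detected 660 Hz tone, or when the override button is pressed
    start_signal = check_mic() || (digitalRead(OVERRIDE_PIN) == HIGH);
  }
}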
Given all of the interconnected parts of the base station, it seemed like a good idea to create a way to keep all of the pieces together and secure when moving it around. At first we strongly considered soldering the breadboarded portions (the voltage dividers and DAC), but given time constraints and the fact that there would still be multiple unconnected pieces, this idea was abandoned. After seeing another team use a piece of cardboard to move their base station around, we decided to mount the FPGA and Arduino to one using screws, and then used zip ties to attach the breadboard and audio jack. All of this made the base station easier to move, made wires less likely to come dislodged in transit, and made it look nicer.
Before the final competition, we decided to give our website a complete overhaul. This meant converting all of our documents (previously written in markdown) into html. Instead of links, the website is also now a single page that uses modals, a type of popup window, to display all of our documents. In addition, we now have a header bar that contains shortcuts to each major section of the website, as well as a link to our YouTube page. All in all, the website appears cleaner, is easier to navigate, and allows us to display much more information in an organized manner.
With all the features we had been adding to Brooklynn throughout the semester, our robot's wiring became incredibly messy. We cleaned it up by removing all of our lengthier wires and replacing them with shorter ones. This made it easier to access different parts of the robot and keeps wires from being accidentally ripped out of their sockets. We also added labels to each of the wires so that any accidental unplugging is easier to fix; the labels also make the wiring look more organized, since each wire can be identified without being traced back to the Arduino and cross-referenced with our code. A final schematic of Brooklynn is included below.
Initially, we decided on using the first two lines of "Take Me Out to the Ball Game." We decided, however, that this didn't fit the style we were going for, and chose to use a less niche song for our done signal. The team settled on "All I Want For Christmas Is You." To keep things simple, we kept our original scale and transcribed the new song using the notes available. The final song felt much more fitting given the timing of the competition and the need for something lighthearted and positive after a comprehensive and challenging semester.
There were many upgrades we attempted for the competition that fell through as either inconsistent, hard to integrate, or too time-consuming. For the final competition, we relied on the components that had reliably worked in the past instead of the upgrades that were not perfected.
We chose to purchase the High Speed Parallax servos, which run on 6-8V instead of 5V, and at speeds up to 180RPM. This involved purchasing new voltage regulators that would output within that range, drawing from a 9V battery.
We had to use two separate voltage regulators, one for each servo, due to the regulators' rating compared to the current draw of the servos. While the above circuit worked and the servos were able to run on the 6.2V output, they took too much time to accelerate and decelerate and were thus less predictable. Line following became erratic because each movement overcorrected. We switched back to the regular Parallax servos distributed in class for the competition.
For the microphone, we briefly attempted to use the Goertzel algorithm instead of the FFT because of the simplicity of the code. We thought this would require less memory due to the significantly lower sampling rate, but it did not integrate well into our code. Since we were already familiar with the FFT code, we instead sampled 64 points rather than the initial 256 to decrease memory usage.
Additionally, while soldering the circuits for our filters and amplifiers to perfboards seemed ideal to avoid accidentally pulling out wires and organizing the system, between the shortage of components in the lab and the time it would have taken to unit test the re-done circuits, we decided our time would be put to better use in other ways.
Part | Cost Per Part | Quantity | Total |
---|---|---|---|
Line Sensor | $3 | 4 | $12 |
IR Sensor | $7 | 3 | $21 |
Parallax Servo | $13 | 2 | $26 |
Linear Regulator | $0.37 | 3 | $1.11 |
High Speed Parallax Servo | $16.99 | 2 | $33.98 |
Shipping & Taxes | $4.79 | | $4.79 |
Total | | | $98.88 |
For the most part, we used the standard number of each part when building Brooklynn. This included 3 IR sensors for seeing walls and 2 parallax servos for movement. We decided to use 4 line sensors on our robot, while some other teams chose to rely on as few as 3 and as many as 5 sensors. Parts such as resistors, wires, breadboards, and capacitors did not count towards our budget. We also 3D printed a couple of small components, but did not count those towards the final cost either. The parts that made it on the robot equated to exactly $60, nearly $40 less than our final budget. This is due to us buying (and not using) 2 aftermarket servos and corresponding linear regulators to power the servos. Although they went unused, the component costs (and shipping) must be included in our budget. Overall, we were $1.12 under our $100 budget.
At the final competition, we were cautiously optimistic about how Brooklynn would fare against the competition. In theory, we had everything working, and a perfect score was possible. We had several 9V batteries on standby and charged our on-board battery in preparation. Unfortunately, things did not go as expected. In the first round, our robot seemingly couldn't properly follow a line. She would overcorrect whenever she moved slightly off the black tape, which led Brooklynn to veer off the track into walls. We tried several times to reset the robot and see if it could complete the maze on its own, but after what felt like dozens of failures, we resorted to physically correcting the robot whenever it began to veer off course. This ultimately prevented us from proceeding to the final rounds.
Before our second round, we went back to the drawing board to determine what went wrong during Brooklynn’s first run. Everything was working the night before, so what could be the issue now? Since nothing changed with our code overnight, we believed that our issue had to have been caused by a physical change on the robot. The only things changed were our batteries, so we identified that our issue had to be down to something with them. Since the robot had been overcorrecting, and the on board battery had just been fully charged overnight, our conclusion was that the battery had been overcharged and was sending a higher voltage than our servos were calibrated for. We couldn’t change the battery at this point, so we modified the line following code to compensate for the higher voltage and turn the same amount it had been throughout the semester. We then tested our changes to make sure Brooklynn could once again follow a line, and were ready for the second round.
The second round was much better in terms of performance. Brooklynn completed the maze, found all treasures, and sent a done signal. Our only mistake was a missed packet, which resulted in a missing square on our screen. Overall, however, everything went according to plan. We received almost all possible points and had a highly successful run, as seen in the video below. Had the first round gone as smoothly, we probably would have made it to the final round too.
There were other facets of the competition worth mentioning as well. Our team won a 3D printed bear for best Verilog code, mainly thanks to Emmett. Emmett was our go-to for anything Verilog, and towards the end of the semester even the TAs would send other groups to him for help with their Verilog code. We also ran our robot in the final maze after the competition, just to see how we would have fared (a video of this is shown below). Brooklynn completed the maze perfectly, not even missing a packet this time. Although we were upset not to have made it to the final round, it was good to know that our team was successful in completing all tasks in time for the competition.
The semester of ECE 3400 was filled with ups and downs, but in the end we were able to complete everything that was asked of us, despite the results at the competition. Ideally, we would have wanted our high speed Parallax servos to work the way we liked, and for our robot to detect treasures more reliably instead of stopping at every intersection to check, but our final version of Brooklynn performed up to the task. We did incredibly well with our display and VGA code thanks to Emmett, and David was able to successfully implement Dijkstra's algorithm to make the robot as efficient at navigating the maze as possible.
And a final and huge thank you to the TAs and Kirstin! Between our truly endless hours spent in PHL427 and our infinite stream of questions, there is no way Brooklynn could have come near to completion without your help. Thank you for clearing up our confusions in lab, helping us in the adventure of 3D printing for the first time, guiding us through long and painful debugging processes, and lecturing on helpful and interesting material--to say the least. It’s been a really fun and educational semester for all of us. Thank you for making ECE 3400 what it is.