Robotics
Innovation Hub Guide Robot
The task for this project was to construct a tour guide robot to give tours of a building on campus. The robot needed to be autonomous and had to be able to avoid collisions with objects or people. The robot's body is built from a combination of 3D-printed parts and formed, powder-coated sheet metal.
My task on this team was to create the obstacle detection and control algorithms. For obstacle detection, the robot uses an Intel RealSense D435 depth-sensing stereo camera. Using Python's OpenCV library, we implemented an object detection program that outputs the minimum distance to any object in each of three vertical columns that divide the camera's field of view. This lets the control algorithm understand both how far away objects are and roughly how wide they are. The object detection algorithm also uses a support vector machine trained to detect people. Using Histogram of Oriented Gradients (HOG) features, the support vector machine acts as a binary classifier that detects whether humans are present in the input image. This allows the robot to give audible warnings to people who may be in its path.
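A minimal sketch of these two pieces is below, assuming a depth frame is already available as a NumPy array (e.g., via pyrealsense2); the function names and thresholds are illustrative, and OpenCV's bundled pre-trained HOG + SVM people detector stands in for the trained classifier.

```python
import cv2
import numpy as np

def column_min_depths(depth_m: np.ndarray, n_cols: int = 3) -> list[float]:
    """Split the depth image into vertical columns and return the
    minimum (closest) valid depth in each column, in meters."""
    h, w = depth_m.shape
    mins = []
    for i in range(n_cols):
        col = depth_m[:, i * w // n_cols:(i + 1) * w // n_cols]
        valid = col[col > 0]  # a zero depth value means "no reading"
        mins.append(float(valid.min()) if valid.size else float("inf"))
    return mins

# HOG features + linear SVM person detector (OpenCV ships a pre-trained one).
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def people_present(bgr_frame: np.ndarray) -> bool:
    """Binary classification: are any people detected in the frame?"""
    rects, _weights = hog.detectMultiScale(bgr_frame, winStride=(8, 8))
    return len(rects) > 0
```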
For autonomous navigation, the robot uses an ultrasonic positioning system purchased from Marvelmind. Beacons mounted inside the building exchange pulses with a beacon mounted on the robot. The robot's x and y coordinates are then sent to the control algorithm, along with the desired x and y coordinates of the next stop on the tour.
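A small sketch of how a position update feeds navigation, assuming the beacon system has already been read into an (x, y) pair in meters (Marvelmind's sample code exposes the mobile beacon's position over USB/serial; the exact read call is omitted here):

```python
import math

def distance_and_bearing(x, y, goal_x, goal_y):
    """Straight-line distance and bearing (radians, world frame)
    from the robot's position to the next tour stop."""
    dx, dy = goal_x - x, goal_y - y
    return math.hypot(dx, dy), math.atan2(dy, dx)
```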
The control algorithm is a fuzzy logic controller that uses linguistic variables with tunable membership functions to autonomously adjust the speed and heading of the robot. It takes seven inputs: the minimum depth to any object in each of the three columns of the camera's field of view, the x and y position of the robot, and the x and y position of the next stop on the tour. The controller outputs two numbers: speed and heading. These are sent over a serial connection to an Arduino that controls the motors. The speed value sets the duty cycle for PWM control of the motor speeds, and the heading value is multiplied by a tunable heading adjustment coefficient and used to increase the speed of one of the motors so that the robot turns in the desired direction.
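The sketch below illustrates the two pieces just described: a tunable triangular membership function (the building block of a fuzzy controller's linguistic variables) and the speed/heading mixing applied to the motor duty cycles. All constants and the turn sign convention are placeholders, not the tuned values from the robot.

```python
def tri(x: float, left: float, peak: float, right: float) -> float:
    """Degree of membership (0..1) of x in a triangular fuzzy set."""
    if x <= left or x >= right:
        return 0.0
    if x <= peak:
        return (x - left) / (peak - left)
    return (right - x) / (right - peak)

# e.g., how strongly a 0.8 m depth reading belongs to a "near" set
near = tri(0.8, left=0.0, peak=0.5, right=1.5)

K_HEADING = 0.4  # tunable heading adjustment coefficient (placeholder)

def mix(speed: float, heading: float) -> tuple[float, float]:
    """Map the controller's (speed, heading) outputs to left/right PWM
    duty cycles: the heading term speeds up one motor so the robot
    turns toward the desired direction (sign convention assumed)."""
    left = speed + max(heading, 0.0) * K_HEADING
    right = speed + max(-heading, 0.0) * K_HEADING
    clamp = lambda d: min(max(d, 0.0), 1.0)
    return clamp(left), clamp(right)
```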
Palletizing Arm
This project demonstrates a palletizing robot arm program that implements a modular approach to palletizing. Our program consists of three main steps:
1. The user is asked to input specifications about the boxes that will be placed onto the pallet. For simplicity, we restricted the program to two box sizes: one heavier and larger, one lighter and smaller. The user inputs the quantity and mass of each box type, as well as the maximum mass that can be stacked onto each pallet.
2. The program uses these inputs to fill pallets in a way that minimizes the total number of pallets required while keeping the center of mass of the placed boxes as close to the center of each pallet as possible (a center-of-mass check is sketched just after this list).
3. The generated stacking layout is sent to a motion planner, which creates a motion plan for a virtual robot to simulate the loading of each pallet.
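The center-of-mass objective from step 2 reduces to a weighted average of box positions; a quick sketch, with illustrative names:

```python
def com_error(boxes, pallet_center=(0.0, 0.0)):
    """boxes: list of (mass, x, y) placements on one pallet.
    Returns the distance from the pallet's center to the stack's
    combined center of mass (lower is better)."""
    total = sum(m for m, _, _ in boxes)
    cx = sum(m * x for m, x, _ in boxes) / total
    cy = sum(m * y for m, _, y in boxes) / total
    return ((cx - pallet_center[0]) ** 2 + (cy - pallet_center[1]) ** 2) ** 0.5
```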
Our algorithm relies on multiple pre-made pallet templates, each constructed so that its center of mass sits at the exact center of the pallet. Given the quantity of boxes to be palletized and the maximum mass per pallet, the algorithm searches the templates for the one that maximizes the mass of each pallet while staying below the specified maximum. It then generates as many pallets of that template layout as possible. Once no more can be generated, the algorithm searches through the templates again to see if any more can be filled using the remaining boxes. When the remaining boxes no longer fit any template, the last few boxes are placed onto the final pallet in a way that minimizes the center-of-mass error. This guarantees that at most the last one or two pallets have a suboptimal center-of-mass placement. The templated layouts also allow the algorithm to scale very well: thousands of optimal pallet layouts can be generated quickly, since boxes are simply slotted into pre-made templates.
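A hedged sketch of that greedy template-search loop is below. A "template" here is reduced to the number of large and small boxes it holds; the real templates also encode the placement coordinates that keep the center of mass at the pallet's center.

```python
from dataclasses import dataclass

@dataclass
class Template:
    n_large: int
    n_small: int

def plan_pallets(n_large, n_small, m_large, m_small, max_mass, templates):
    """Greedy fill: repeatedly pick the heaviest template that fits the
    mass limit and the remaining box counts, until none fits. Leftover
    boxes would then go onto a final, center-of-mass-balanced pallet."""
    pallets = []
    while True:
        feasible = [
            t for t in templates
            if t.n_large + t.n_small > 0              # ignore empty templates
            and t.n_large <= n_large and t.n_small <= n_small
            and t.n_large * m_large + t.n_small * m_small <= max_mass
        ]
        if not feasible:
            break
        best = max(feasible,
                   key=lambda t: t.n_large * m_large + t.n_small * m_small)
        pallets.append(best)
        n_large -= best.n_large
        n_small -= best.n_small
    return pallets
```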
Bat Pack
The Bat Pack was designed to assist visually impaired individuals with navigation by providing direction-based obstacle detection through a haptic feedback headband. The backpack has four ultrasonic distance sensors attached at different locations: one on the left side, one on the right, one on the front facing forward, and one on the front facing down toward the ground. These sensors measure distances to objects using pulses of sound inaudible to humans: each sensor emits a pulse, waits for the reflection, and calculates the distance to the object from the time between the pulse and its echo and the speed of sound.
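The time-of-flight calculation is simple enough to show directly, assuming the sensor reports the round-trip echo time in seconds (as common HC-SR04-style modules do):

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 °C

def echo_time_to_distance(round_trip_s: float) -> float:
    """Distance to the reflecting object in meters. The pulse travels
    out and back, so the one-way distance is half the round trip."""
    return SPEED_OF_SOUND * round_trip_s / 2.0
```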
The computer in the Bat Pack maps the distance values reported by the ultrasonic sensors to the power supplied to vibration motors mounted on the front, left, and right sides of the headband. Objects within a certain range of a sensor trigger the corresponding motor to vibrate, and the vibration intensity increases as the object gets closer. This gives the wearer information about their surroundings in a 180° arc, with vibration intensity indicating how close objects are.
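A sketch of that distance-to-vibration mapping: within a trigger range, motor power ramps up linearly as the object gets closer. The range bounds and 8-bit PWM scale are placeholder values, not the tuned ones.

```python
MIN_DIST = 0.2   # m: at or below this, full vibration
MAX_DIST = 2.0   # m: beyond this, motor off
PWM_MAX = 255    # 8-bit duty cycle

def vibration_pwm(distance_m: float) -> int:
    """Map a sensor distance to a vibration motor duty cycle."""
    if distance_m >= MAX_DIST:
        return 0
    if distance_m <= MIN_DIST:
        return PWM_MAX
    frac = (MAX_DIST - distance_m) / (MAX_DIST - MIN_DIST)
    return int(frac * PWM_MAX)
```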