I designed and built an Omnidirectional Modular Robot (OMR) from scratch within a few weeks.
OMR is a four-wheel omniwheel robot with a holonomic drive. This means the robot can move in any direction, nearly instantaneously. The wheels are positioned 90 degrees apart from each other. The picture below illustrates the layout of the drive system.
OMR uses a master-slave I2C setup. The slaves are two microcontrollers that drive the two motor drivers with PWM, which in turn control the four motors. Each motor has an absolute encoder managed by the microcontrollers. The benefit of this setup is that it is very modular and only requires the SDA and SCL signals for communication. Because it is a master-slave system, the master does not need to wait for the slaves to finish working, so there is very little delay and everything can run concurrently. The master can be a Raspberry Pi or any other microprocessor, so it can handle image processing or even other sensors. OMR also has a 2.4 GHz transceiver for communication. The diagram below depicts the layout of the control system.
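To give a feel for how the master talks to the motor-driver slaves, here is a minimal sketch of what a speed command could look like from the Raspberry Pi side. The register value, slave addresses, and byte layout are illustrative assumptions, not the actual OMR firmware protocol:

```python
import struct

# Hypothetical register/address map for the two motor-driver slaves --
# these values are illustrative, not the real OMR firmware's.
SLAVE_ADDR = (0x10, 0x11)   # one I2C address per motor-driver microcontroller
REG_SET_SPEEDS = 0x01       # command: set both wheel speeds on this slave

def pack_speeds(left, right):
    """Pack two signed wheel speeds (-1.0..1.0) into little-endian int16 bytes."""
    to_i16 = lambda v: int(max(-1.0, min(1.0, v)) * 32767)
    return list(struct.pack('<hh', to_i16(left), to_i16(right)))

def send_speeds(bus, slave_index, left, right):
    """Fire one speed command over I2C; the master does not wait for the
    slave's control loop, it just writes the transaction and moves on."""
    payload = pack_speeds(left, right)
    bus.write_i2c_block_data(SLAVE_ADDR[slave_index], REG_SET_SPEEDS, payload)

# On a Raspberry Pi this would be driven with the smbus2 library:
#   from smbus2 import SMBus
#   with SMBus(1) as bus:
#       send_speeds(bus, 0, 0.5, -0.5)
```

Because each command is a single short I2C write, the master stays free to do image processing or sensor fusion between commands.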
OMR is going to have a cheap form of LIDAR, a compass, and possibly an IMU. This gives the robot a clear understanding of where it is in the environment and could allow it to do SLAM to map out the environment. By combining an IMU, encoders, and SLAM, the robot can move quickly and accurately and adjust the speed of each wheel correctly even if the wheels' contact with the floor is uneven. Below is a diagram of a robot localizing itself in an environment.
OMR has a cheap analog absolute encoder. It uses two grayscale sensors positioned 90 degrees apart and a gradient circle to generate sine and cosine signals, from which the absolute position of the wheel is recovered. The illustration below shows how this works.
The sensors did not work as easily as expected; however, I did manage to get quite close and I am still working on improving this. The first time the robot runs, each motor spins a few rotations to record the minimum and maximum sensor values. From these, the angle can be calculated. Unfortunately, as the real data taken from the sensor (depicted below) shows, the readings were far from clean.
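The calibration and angle-recovery steps described above can be sketched as follows. This assumes raw ADC readings from the two grayscale sensors; the sample format and normalization are my own simplification of the idea, not the robot's actual firmware:

```python
import math

def calibrate(samples):
    """From a few calibration rotations, record min/max of each channel.
    Each sample is a (sin_raw, cos_raw) pair of ADC readings."""
    sin_vals = [s for s, c in samples]
    cos_vals = [c for s, c in samples]
    return (min(sin_vals), max(sin_vals)), (min(cos_vals), max(cos_vals))

def wheel_angle(sin_raw, cos_raw, sin_range, cos_range):
    """Normalize each channel to -1..1 using the calibrated min/max,
    then recover the absolute wheel angle with atan2."""
    def norm(v, lo_hi):
        lo, hi = lo_hi
        return 2.0 * (v - lo) / (hi - lo) - 1.0
    return math.atan2(norm(sin_raw, sin_range), norm(cos_raw, cos_range))
```

The atan2 of the two normalized channels gives the angle over the full -pi to pi range, which is why two sensors 90 degrees apart are needed rather than one.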
The data was clearly not good, so I first tried to fix this physically. I found a higher-DPI printer and printed a better gradient circle; this was the result.
The physical change improved things greatly, but it still was not good enough, so I then worked on the software side. The result was a lot closer; however, there is still a problem at the minimums.
I used proportional control on the wheels so that the motors can move to a set angle; however, the wheel sometimes jitters back and forth. This can be fixed by adding integral and derivative control. PID will allow the motors to move to the right angles and then allow me to move the wheels at a set velocity or even acceleration.
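A minimal PID loop for holding a wheel at a target angle could look like the sketch below. The gains here are placeholders, not tuned values for OMR:

```python
class PID:
    """Minimal PID controller: the proportional term drives toward the
    target, the integral removes steady-state error, and the derivative
    damps the back-and-forth jitter a pure P controller produces."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, error, dt):
        self.integral += error * dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv
```

Each control cycle the microcontroller would compute `error` as the target angle minus the encoder angle and apply the returned value as a PWM duty cycle; the derivative term is what kills the jitter.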
I modeled it in SolidWorks and then laser cut it from acrylic. I then tapped the holes. Each module has its own laser-cut mount. The mounts all use LEGO spacing so that LEGO can also be mounted on the platform. It is very modular.
I plan to use this for TOBOM and implement SLAM. I got the encoders and I2C communication working and I am now working on adding a compass. Below are some images.
My approach was to collect a few leg-length data values through trial and error and then obtain a function using an exponential fit in MATLAB. I collected 8 lengths for 8 desired heights and used MATLAB's exponential fit to obtain the function for any desired height. The function obtained was rest_leg_length = 0.5009*exp(0.04473*height_desired) - 0.5657*exp(-8.716*height_desired). Before trying an exponential fit I tried linear and polynomial fits; however, those equations did not fit as well as the exponential. Below illustrates the best fit and the result.
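The fitted function is straightforward to evaluate in code; here it is transcribed directly from the coefficients above:

```python
import math

def rest_leg_length(height_desired):
    """Two-term exponential fit from MATLAB (coefficients from the text):
    maps a desired hop height to the rest leg length that achieves it."""
    return (0.5009 * math.exp(0.04473 * height_desired)
            - 0.5657 * math.exp(-8.716 * height_desired))
```

The second (fast-decaying) exponential mainly shapes the curve at small heights, while the first term dominates for larger desired heights.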
My approach was trial and error. I tried numerous values and compared settling time against oscillation. The result I got was hip_air_k = 30 and hip_air_b = 3. The leg swings to the position quickly and has almost no oscillation. If hip_air_b is increased a little more it decreases the shaking but increases the settling time. Below is the graph of the motion.
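The gains above act as a spring-damper (PD) law on the hip during flight. A sketch of that torque law, assuming the hip angle and angular velocity are available:

```python
def hip_air_torque(theta, theta_dot, theta_target, k=30.0, b=3.0):
    """Spring-damper torque that swings the leg toward theta_target
    while airborne: k (hip_air_k) fights position error, b (hip_air_b)
    damps the oscillation a pure spring would cause."""
    return -k * (theta - theta_target) - b * theta_dot
```

This is exactly the trade-off described: raising `b` bleeds off more velocity each instant (less shaking) but also slows the approach to the target angle.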
My approach was to estimate the position the foot will travel through as the time spent on the ground multiplied by the current velocity, divided by two for the midpoint. However, from testing I found that the neutral position is actually less than the midpoint, so I increased the divisor. I tried numerous values and then fit an equation, which made the velocity control better. With the x and y positions known, the angle was found with the arctangent. However, to add horizontal motion, a gain multiplied by the difference from the desired speed was added. If this value is too small or too large it leads to problems, so I had to test it numerous times. I tried placing the gain inside and outside the arctangent. I found that putting it inside made the robot hop more intensely at the beginning but level out to the target value faster, while putting it outside made it level out more smoothly but take longer. I chose to put it outside. The hip torque was simply found with trial and error until the body did not shake or move anymore. Below shows speeds of 1 and 0.7 with a height of 0.6.
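This is essentially the Raibert-style foot-placement heuristic: a neutral touchdown point from stance time and current velocity, plus a velocity-error correction. Here is a sketch of the "gain outside the arctangent" variant settled on above; the divisor and gain values are illustrative stand-ins, not the tuned ones:

```python
import math

def touchdown_angle(v, v_desired, stance_time, y_height,
                    neutral_divisor=2.2, k_xdot=0.05):
    """Raibert-style foot placement. neutral_divisor > 2 reflects the
    empirical finding that the real neutral point falls short of the
    midpoint; both it and k_xdot are illustrative, untuned values."""
    x_f = v * stance_time / neutral_divisor        # neutral foot x-position
    base = math.atan2(x_f, y_height)               # angle from x and y
    return base + k_xdot * (v - v_desired)         # gain outside the arctan
```

Moving the `k_xdot` term inside the `atan2` instead gives the other behavior described: a harder initial hop that converges faster.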
My approach was to use only the image file and run it through a series of image-processing steps to detect the centroid and orientation of the object. I first converted the RGB image to grayscale and then to a binary image. Objects with fewer than 10000 pixels were then removed from the binary image. After removing all objects smaller than 10000 pixels, the number of objects in the image was counted; if it was not one, the required size was increased until only one object, the target, was left. After isolating the target, the image was dilated to remove specks and clean it up. The properties of the object in the cleaned image were then found. The centroid was returned and the angle was converted to radians in the range -pi to pi.
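The final step, extracting the centroid and orientation once a single object remains, can be done with image moments (this is what region-properties routines compute internally). A pure-NumPy sketch of just that step, assuming the cleaned-up binary mask is already available; note the moment formula alone gives the orientation modulo pi:

```python
import numpy as np

def centroid_and_angle(mask):
    """Centroid and orientation of the single object in a binary mask,
    from the first and second image moments."""
    ys, xs = np.nonzero(mask)
    cx, cy = xs.mean(), ys.mean()                  # centroid (first moments)
    mu20 = ((xs - cx) ** 2).mean()                 # second central moments
    mu02 = ((ys - cy) ** 2).mean()
    mu11 = ((xs - cx) * (ys - cy)).mean()
    # Axis of least second moment; result is in -pi/2..pi/2 radians.
    theta = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)
    return cx, cy, theta
```

Resolving the remaining pi ambiguity (to get the full -pi..pi range mentioned above) needs extra information about the object, such as which end is which.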
My approach was to generate a line between the target and the arm and then generate some resolution of solutions for points on the line, slowly turning the angle of the arm until it reaches the target exactly. If a solution is not found, the angle constraint is relaxed, unless it is the final solution. If no solution is found at all, the arm shows the last position closest to the target without holding it. The end result was a smooth motion from the start to the target while adjusting the angle of the hand.
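As a rough illustration of the waypoint-plus-IK idea, here is a sketch for a planar two-link arm (an assumption; the actual arm's geometry and constraint handling are not shown here). Unreachable waypoints return `None`, which is where the constraint-relaxation step would kick in:

```python
import math

def ik_2link(x, y, l1=1.0, l2=1.0):
    """Analytic inverse kinematics for a planar 2-link arm.
    Returns (theta1, theta2) or None if the point is unreachable."""
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if abs(c2) > 1.0:
        return None                                # out of the arm's reach
    theta2 = math.acos(c2)                         # elbow-down solution
    theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                           l1 + l2 * math.cos(theta2))
    return theta1, theta2

def waypoints(start, target, n=10):
    """Evenly spaced points on the line from start to target."""
    (x0, y0), (x1, y1) = start, target
    return [(x0 + (x1 - x0) * i / n, y0 + (y1 - y0) * i / n)
            for i in range(n + 1)]
```

Solving the IK at each waypoint in turn, while gradually rotating the hand angle toward its goal, produces the smooth start-to-target motion described.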
The video below demonstrates the successes and failures of grabbing a randomly placed target.