Software

Introduction


When we designed SynBioBot's software, we focused on the following three goals.

First, "Can we deliver accurate coordinates to the robot for precise movement?"

Second, "Can we modularize each action for versatility in different experiments?"

Third, "Is it easy for users to use?"


To reach our first goal, we used the top camera to deliver approximate coordinates to the robot and used a gripper camera to calibrate those coordinates precisely.

To reach our second goal, we subdivided the experimental process and modularized the repetitive motions into functions.

To reach our last goal, we developed a user interface and added remote communication so that experiments can be carried out easily and conveniently with SynBioBot from anywhere in the world.





Coordinate transfer and correction



  One of the most important parts of automating experiments with a robot is moving the robot accurately to the desired location. To do so, we have to deliver exact coordinate values to the robot, and we decided to use cameras for that. First, we selected seven locations on the floor (Fig 1) to serve as reference points and attached ArUco markers (Fig 2) to them. Approximate coordinate values are then obtained through the camera attached to the top of the table. However, the top camera alone cannot guarantee exact coordinate values because it photographs the markers from a distance. To solve this problem, we decided to use one more camera: using the camera at the end of the gripper, the correct coordinate value could be obtained to within 1 mm by calibrating the value.


Fig1. Arrangement of ArUco markers
Fig2. ArUco markers


Overall coordinate designation process


 Many coordinate values are needed to automate an experiment using a robot. Even pipetting alone requires numerous coordinates: the coordinates of the pipette handle for holding the pipette, the position of the tip, and the exact positions for aspirating and dispensing the solution. If we predetermined all of these coordinate values, the code would become considerably longer, making it difficult to interpret intuitively and complicating both development and maintenance of the system. In addition, if a sudden change occurred in the experimental environment, the system could not respond to it. One alternative is to obtain every coordinate through deep learning or image processing, but we ruled this out for three reasons.


  1. 100 % reliability cannot be guaranteed when recognizing objects through deep learning. In particular, reliability would be even lower because biological experiments use many transparent objects, such as Petri dishes and conical tubes. Furthermore, deep learning requires a long running time, significantly increasing the overall experimental time.
  2. If the necessary coordinate values are derived through image processing, it is difficult to guarantee the millimeter-level accuracy required for biological experiments. With such errors, a long experimental process cannot be carried out continuously.
  3. In practice, a long development period is needed to reach high accuracy with deep learning and image processing, and we did not have that much time.

 Therefore, instead of obtaining all the coordinates through image processing, we obtained only the coordinates of the seven reference points, which serve as the basis for the other coordinates, and physically fixed the other coordinates at predetermined distances from each reference point. In this way, even if a reference point moves slightly, the relative distance between it and the other coordinates does not change, so the system can tolerate some changes in the experimental environment. In addition, once the coordinates of the seven reference points are corrected, a reliable overall set of coordinate values can be obtained with only seven adjustments. This also makes it easier to set up SynBioBot in a new location: only the distances between each reference point and the remaining coordinates need to be fixed.
 To support this coordinate designation method, processes 1.2 and 1.3 were implemented. In addition, the Python dictionary off_data and the function offset_position were created to make it easy and intuitive to specify variables in relative coordinates.
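As a minimal sketch of how such relative coordinates can be handled (the entry names and offset values below are illustrative assumptions, not the team's actual data):

```python
# off_data maps a named location to its fixed displacement (mm) from a
# reference point; the names and numbers here are hypothetical examples.
off_data = {
    "pipette_handle": (40.0, 0.0, 120.0),
    "tip_rack": (0.0, 85.0, 0.0),
}

def offset_position(base, name):
    """Return the absolute coordinate of `name`, displaced from `base`."""
    dx, dy, dz = off_data[name]
    x, y, z = base
    return (x + dx, y + dy, z + dz)
```

With this scheme, correcting a reference point automatically corrects every coordinate derived from it.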


Get world position


Fig3
Fig4

  "Get world position" is the process of obtaining approximate coordinates using the top camera (Fig 4). The detected marker region provides both its corner coordinates and the marker's index. More specifically, we first extract the index and the top-right vertex coordinates of the ArUco markers using the aruco.detectMarkers method on each frame of the top-cam image. Coordinate values are then extracted from successive frames until every index has been measured more than 10 times, and the coordinate values of each index are averaged. The measured edge length is compared with the actual edge length to calculate the scale coefficient (Equation 1), which calibrates the scale between the actual position and the position recognized by the camera. Using the scale coefficient, we can calibrate to the actual coordinate value (Equation 2).


scale coefficient = (actual edge length) ⁄ (measured edge length)

Equation 1. Scale coefficient

actual position = scale coefficient × measured position

Equation 2. Calculating the actual coordinate value


  In addition to the scale problem, the problem of eccentricity must be solved. A center offset is used to align the center of the top cam with the origin of the actual robot coordinate system.


  The approximate coordinate value of the upper-right vertex of each ArUco marker, with both corrections applied, is then returned and passed to the robot.
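The averaging and scale calibration above can be sketched in pure Python; in the real code the per-frame corner coordinates come from cv2.aruco.detectMarkers on the top-cam frames, and the marker edge length and sample threshold below are assumptions.

```python
ACTUAL_EDGE_MM = 50.0   # assumed physical edge length of a printed marker
MIN_SAMPLES = 10        # keep sampling until every index has 10+ measurements

def average_corners(samples):
    """samples: {marker_id: [(px, py), ...]} -> {marker_id: (mean_px, mean_py)}"""
    return {
        mid: (sum(p[0] for p in pts) / len(pts),
              sum(p[1] for p in pts) / len(pts))
        for mid, pts in samples.items() if len(pts) >= MIN_SAMPLES
    }

def scale_coefficient(measured_edge_px):
    # Equation 1: actual edge length / measured edge length
    return ACTUAL_EDGE_MM / measured_edge_px

def to_world(measured_pos, coeff, center_offset=(0.0, 0.0)):
    # Equation 2, plus the center offset that aligns camera and robot origins
    return (measured_pos[0] * coeff + center_offset[0],
            measured_pos[1] * coeff + center_offset[1])
```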


Adjust world position


Fig5. Code for adjusting the world position.

 "Adjust world position" is the process of correcting the approximate coordinates received from the "get world position" process.
  First, the robot moves to the pose shown in the picture (Fig 6) so that the gripper cam can photograph the ground closely and vertically. Code similar to "get world position" is then used to obtain the top-right coordinate value of one target ArUco marker with the gripper cam (Fig 7). This time, we calculate the difference between the center of the gripper and the top-right corner of the marker, not the distance from the center axis of the robot coordinate system.

When the calculated difference is passed to the robot arm, the arm moves by that difference and repeats the same process at the new position. The position correction terminates when the difference in coordinate values is within 1 mm, or after 5 iterations to avoid an infinite loop. In general, it delivers coordinate values accurate to within 1 mm in 2 to 3 iterations.
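The correction loop can be sketched as follows, with stand-in functions for the gripper-cam measurement and the arm move; the 1 mm tolerance and 5-iteration cap come from the text.

```python
TOL_MM = 1.0     # terminate when the remaining error is within 1 mm
MAX_ITER = 5     # hard cap to avoid an infinite loop

def adjust_world_position(measure_error, move_by):
    """measure_error() -> (dx, dy) in mm; move_by((dx, dy)) shifts the arm."""
    for i in range(MAX_ITER):
        dx, dy = measure_error()
        if abs(dx) <= TOL_MM and abs(dy) <= TOL_MM:
            return i                  # number of corrective moves made
        move_by((dx, dy))
    return MAX_ITER
```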


Fig6. Adjusting world position motion
Fig7. Gripper cam




Robot motion planning



 A robot does the same thing every time, but our experimental environment is one in which we cannot guarantee that objects will be in the same place every time. This has been a key challenge since the beginning of SynBioBot's development. To solve it, we tried to maintain as much stability as possible in a chaotic experimental environment; these efforts, which started with the hardware design, carry through to the robot motion code.
 We also decided to build a robot that can be applied to various biological experiments, not just one. For expandability, we analyzed several experimental processes and grouped common actions into single functions. This made it easy and intuitive to write the code for an experimental process using these functions, and versatile across many experiments.
 We created a total of 14 functions: one for coordinate correction, two for coordinate variable designation, and 11 for actual robot motion. The seven most important motions are explained in detail below.



Grip motion


 We made it possible to combine a series of actions (approaching an object, picking it up, putting it down) and to choose the direction from which the object is approached, so that the motion can be used flexibly in different contexts. The grip motion developed in this way is included in the cap_open motion, plate_open motion, and get_new_plates motion, which made those motions much easier to develop. The following are the issues that occurred while developing the grip motion, and their solutions.



Issue 1. Problems colliding with other objects when approaching the target position

 To address this problem, we added parameters to the grip function that select the manner of approach. In addition to the four approaches parallel to the x and y axes, we created an incubator approach that accesses the plate holder of the incubator at 45 degrees from the x-axis, and a safe approach that solves the problem of some joints colliding with the floor during low approaches.
 Also, if the gripper must enter a narrow space while fully open, it collides with the object and fails to enter properly. To solve this, the width of the gripper is adjusted according to the approach position. Video 1 shows the robot using safe mode during a +y-direction approach and approaching at half the gripper width to pick up a plate.
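A hedged sketch of a grip function with an approach-direction parameter; the direction names, the 50 mm stand-off, and the width values are illustrative assumptions, not the team's exact numbers, and move/set_width stand in for the real robot commands.

```python
APPROACH = {
    "+x": (-50, 0, 0), "-x": (50, 0, 0),   # approaches parallel to the x axis
    "+y": (0, -50, 0), "-y": (0, 50, 0),   # approaches parallel to the y axis
    "incubator": (-35, -35, 0),            # 45-degree approach for the incubator holder
    "safe": (0, 0, 60),                    # approach from above, avoiding the floor
}

def grip(target, direction="+x", grip_width=100, move=None, set_width=None):
    """Approach `target` from `direction`, then close the gripper on it."""
    ox, oy, oz = APPROACH[direction]
    set_width(grip_width)                  # pre-open (half width for narrow spaces)
    move((target[0] + ox, target[1] + oy, target[2] + oz))  # pre-grasp pose
    move(target)                           # slide in along the chosen direction
    set_width(0)                           # close on the object
```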


Issue 2. The gripper does not hold the object tightly while moving


 This problem could be solved simply by having the gripper hold the object more strongly. However, because biological experiments mainly use plastic consumables, there was a risk of damaging objects held with too much force. So, we set the motion parameters to move slowly while the gripper is carrying an object, and then move quickly again at the original speed after the gripper has put the object down.



Pipette motion


 Pipette motion was the most difficult part of automating biological experiments in the same way a human performs them. First, high precision was required to insert the tip accurately and to aspirate and dispense the solution at the correct position. However, it was difficult to achieve high precision holding the pipette with a 2-finger gripper rather than a 5-finger human hand. Also, after inserting the tip, the combined length of the pipette and tip often caused the robot to fail to reach the desired point, because the robot had to move at a higher position than usual, near its operating limit. Finally, for scalability and convenience, we had to implement pipette motion as a single function while making some of its actions independently executable. The following explains how we solved these problems.



Issue 1. Problem with the varying height of the tip of the pipette

 When the button is pressed by adjusting the width of the gripper, the height of the pipette held at the end of the gripper naturally changes. The pipette button has a travel of about 18 mm, and the resulting travel at the tip of the pipette is about 9 mm, half of that. This is a significant problem when the pipette must aspirate the solution accurately at a certain height in a plate or conical tube.
 To solve this problem, when the gripper moves, the robot arm moves the corresponding distance in the opposite direction at the same speed, so the height change of the pipette caused by the gripper's movement is compensated by the movement of the robot arm. If the button of the pipette is pressed, the height of the tip increases, so the arm moves downward; likewise, if the button is released, the height of the tip decreases, so the arm moves upward. As a result, as shown in the video, the height of the tip changed very little when the pipette was pressed.
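The compensation can be sketched as a simple height correction; the 0.5 ratio follows from the ~18 mm button travel versus ~9 mm tip travel noted above, and the function name is an assumption.

```python
TIP_PER_BUTTON = 0.5   # ~9 mm tip travel per ~18 mm button travel

def compensated_z(arm_z, button_travel_mm, pressing=True):
    """Return the arm height that keeps the pipette tip at a constant altitude."""
    dz = button_travel_mm * TIP_PER_BUTTON
    # pressing raises the tip -> lower the arm; releasing does the opposite
    return arm_z - dz if pressing else arm_z + dz
```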



Issue 2. Causing 'Unreachable point error' due to the height of the pipette and tip

 The range of motion of the robot arm is hemispherical, centered on the base of the arm. Therefore, at high altitudes, the range in which the arm can move is greatly limited. Once the robot grabbed the top of the pipette with the gripper and inserted the tip below it, it had to raise the gripper's altitude significantly to keep the tip from touching the floor. As a result, an 'Unreachable point error' occurred frequently when the arm tried to move to a position beyond its range of movement. There was also a problem in which a position, although reachable in principle, could not be reached when starting from a particular pose.
 We adopted the approach of first moving to a known reachable position and then moving to the real target position. As a result, it was possible to pipette stably at all locations without the 'Unreachable point error'.
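The two-step move can be sketched as follows; the waypoint would be a pose known to be reachable from anywhere in the workspace, which is an assumption here, and `move` stands in for the real robot call.

```python
def move_via(goal, waypoint, move):
    """Move to `goal` through an always-reachable intermediate `waypoint`."""
    move(waypoint)   # first hop: a pose well inside the workspace
    move(goal)       # second hop: the real target near the motion limit
```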


Issue 3. Problem of pipette shaking

 The pipette must not shake, both to attach the tip by moving only the pipette and to pipette accurately. However, shrinking the hole in the pipette holder to grip the pipette more tightly would make the pipette difficult to pick up, so the hole size could not be reduced.
 To solve this problem, as soon as the robot picks up the pipette, the gripper squeezes it firmly. In addition, when the pipette contains a solution and cannot be held tightly, the robot minimizes shaking by applying just enough force to keep the solution from escaping and by adjusting the motion parameters.


Issue 4. Modularization of pipette motion

Pipette motion consists of the following long process.


  1. Pick up the pipette from the pipette holder.
  2. Insert pipette tip to pipette.
  3. Inhale the solution from a conical tube.
  4. Dispense the solution to the plate.
  5. Remove the tip.
  6. Put the pipette back in the pipette holder.

 However, this full process is not repeated every time. For example, when pipetting the same solution several times, the tip replacement can be omitted. In addition, implementing behavior such as pipette mix requires executing only part of the total pipette motion. So, we divided the whole motion into five phases: an initial phase of holding the pipette and attaching the tip, a second phase of aspirating the solution, a third phase of dispensing the solution, a fourth phase of removing the tip, and a final phase of placing the pipette on the stand, so that only the necessary phases need be used. This made automating real-world experiments easy and intuitive.
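The phase split can be sketched as a function that executes only the requested phases, always in canonical order; the phase names follow the text, while the selection interface is an assumption.

```python
PHASES = ["hold_and_tip", "inhale", "dispense", "remove_tip", "return_pipette"]

def pipette(wanted=None, run=lambda phase: None):
    """Execute only the requested phases, always in canonical order."""
    wanted = PHASES if wanted is None else wanted
    executed = []
    for phase in PHASES:
        if phase in wanted:
            run(phase)            # `run` stands in for the real robot motion
            executed.append(phase)
    return executed
```

Pipette mix, for example, can then run the initial phase once, repeat the inhale/dispense phases, and finish with the final phase.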


Issue 5. Problems with the varying location of new tips after using pipette tips

 When a tip is used during pipetting, a tip will no longer exist at the position of the previously used pipette tip. Therefore, the location of the next tip must be updated before the pipette can be used again. To do this, after each tip is used, the stored tip coordinate value is updated to point to the next tip.
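The bookkeeping can be sketched as simple rack arithmetic; the rack geometry used here (8 tips per row, 9 mm pitch) is an assumed example, not the team's actual layout.

```python
TIP_PITCH_MM = 9.0   # assumed spacing between adjacent tips
TIPS_PER_ROW = 8     # assumed rack width

def next_tip_position(rack_origin, tip_index):
    """Return the coordinate of the tip at `tip_index`, counting row by row."""
    row, col = divmod(tip_index, TIPS_PER_ROW)
    return (rack_origin[0] + col * TIP_PITCH_MM,
            rack_origin[1] + row * TIP_PITCH_MM,
            rack_origin[2])
```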



Pipette mix


Pipette mix is the motion that mixes the solution. It consists of the following steps.


  1. Inhale the solution from the lower side of the inclined plate by using the pipette.
  2. Dispense the solution from the higher side of the plate by using the pipette.
  3. Repeat steps 1 and 2 to mix the solution.

 For this process, the pipette must be able to aspirate and dispense the solution at the correct position. In addition, because it is a pipette-based motion, parts of the normal pipette motion had to be reused. We expected this to be a very difficult task, but thanks to the well-made pipette motion, we were able to develop it without much difficulty.
 The problem of aspirating and dispensing the solution at the correct position was solved by the same method as Issue 1 of the pipette motion. In addition, the repeated part is implemented simply by executing the initial phase of the pipette motion, performing the pipette mix, and then executing the final phase of the pipette motion.




Suction motion


Suction motion consists of the following steps.


  1. Press the footrest of the suction machine using the toggle clamp.
  2. Pick up the suction machine.
  3. Move to the location where suction will be performed.
  4. Perform suction.
  5. Put the suction machine down again.
  6. Release the footrest of the suction machine.

 Thanks to the well-designed, robot-friendly hardware, it was easy to create this motion even though it is a long process. However, some problems were found when using this function in real-world experiments. The most important one, related to the suction hose, is described first.



Issue 1. Control of suction hoses for precise adjustment of the suction position

 The most important factor in suction motion is precisely adjusting the suction position. However, due to the elasticity of the rubber hose of the suction device, the tip position of the suction device varied in each experiment, which made suction at the desired position almost impossible.
 To solve this problem, part of the hose was secured with cable ties, but this greatly limited the range of motion of the robot arm while holding the suction machine. So, we found another solution: minimizing movement after lifting the suction machine. As a result, suction was performed at the correct position, in a manner similar to the method used in pipette motion.



Open Conical Cap


 Unlike the plate, the cap of the conical tube must be opened by turning, which requires the robot arm to hold the cap and rotate it in place. However, if the arm continued to rotate in one direction, the cable at its end would twist and, in the worst case, could be cut. In addition, the robot controller drives the robot along the shortest path, so there was a problem with the arm rotating in the opposite direction from the one we wanted. Finally, rubber was added to the cap to increase friction, but because of the rubber's tackiness, the cap stuck to the gripper and did not fall off when placed on the floor.



Issue 1. Twist issue of cable during rotation

If the end of the robot arm keeps rotating in one direction, the gripper cable and the cable of the webcam attached to the gripper risk being twisted or broken. We solved this in the following way.

Open the cap

  1. Grip the cap with the gripper.
  2. Then rotate the gripper 180 degrees counterclockwise to open the lid a little.
  3. Then release the cap that the gripper was holding.
  4. Rotate the gripper 180 degrees clockwise to return to its original position.
  5. Repeat steps 1-4 until the cap is fully open.

We were able to prevent the cables from twisting in this way.
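The grip-rotate-release-unwind cycle above can be sketched as follows; rotate() and set_grip() are stand-ins for the real gripper commands, and the number of half turns is a parameter.

```python
def open_cap(turns, rotate, set_grip):
    """Loosen a conical-tube cap without accumulating cable twist."""
    for _ in range(turns):
        set_grip(True)     # 1. grip the cap
        rotate(-180)       # 2. half turn counterclockwise loosens the cap a little
        set_grip(False)    # 3. release the cap
        rotate(+180)       # 4. rotate back, untwisting the cable
```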


Issue 2. Problems with the robot always moving to the shortest distance

 The controller that executes robot motion always drives each joint along the shortest path. So, it was not easy to rotate the gripper freely counterclockwise as we wanted. To solve this problem, we limited a single rotation command to 30 degrees. For example, to move the robot arm 180 degrees, it moves 30 degrees six times. This allowed the arm to rotate by the desired angle in the desired direction.
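Splitting a rotation into at most 30-degree commands, so that the controller's shortest-path behavior cannot flip the direction, can be sketched as follows; the helper name is an assumption.

```python
STEP_DEG = 30   # maximum single rotation command

def rotate_in_steps(angle_deg):
    """Return the list of single rotation commands that realize `angle_deg`."""
    sign = 1 if angle_deg >= 0 else -1
    full, rest = divmod(abs(angle_deg), STEP_DEG)
    steps = [sign * STEP_DEG] * int(full)
    if rest:
        steps.append(sign * rest)   # final partial step keeps the total exact
    return steps
```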


Issue 3. The cap does not release from the gripper due to its high friction

 As a side effect of adding rubber to the cap for friction, the cap sometimes did not fall off the gripper. In that case, the cap would not be placed at the intended location but would drop at a random position, and when the robot arm later tried to close the lid, the cap could not be found where expected. To solve this problem, after opening the gripper, the arm repeatedly moves 20 mm up and down, reducing the friction between the cap and the gripper. As a result, even a cap that did not release easily could be made to fall off.



Plate adjustment


 This is a behavior that is unnecessary when humans experiment. However, to perform the pipette or suction motion at the correct position with the robot, the exact location of the plate containing the solution had to be ensured. To do this, we added a motion that pushes the plate so that it seats snugly in the recess of the plate holder.
 Before this motion was added, vibration while lifting and lowering the plate, together with the plate's own inertia, caused the plate to end up in slightly different positions, so pipetting and suction were performed in the wrong place. This dramatically reduced the reproducibility of the experiment. By using this motion, we were able to guarantee the correct position of the plate and significantly increase reproducibility.



Issue 1. The distance to be pushed depends on the presence or absence of a plate lid

 Because this motion was added to increase reproducibility, it had to be precise. Therefore, the difference in plate diameter depending on whether the lid is on, which would normally not matter, became an important issue. When the plate must be pushed with its lid off, the robot pushes it 4 mm farther.



Get-new-plates


 "Get-new-plates" is the motion that moves plates from the plate drawer to the plate holder, for dividing cells in experiments such as cell thawing. We developed the get-new-plates function so that the number of plates to pull out can be selected. It also updates the position of the next plate each time one is taken out of the drawer, and it places plates only in positions on the plate holder where no plate already exists.




Other motions


The functions that were not described in detail above are as follows.


Coordinate correction function

adjust world: A function for correcting the base coordinates that serve as the reference for all other coordinates. It was described in detail earlier.


Function for specifying and moving variables

These functions specify coordinate variables and can also move the robot to the specified coordinates.


  • posture-change: Modifies and stores the Euler angle values that specify the orientation of the gripper in space. The robot arm can also move to that angle.
  • offset-position: Creates and stores new coordinate values using offset data. The robot arm can also move to the corresponding coordinates.

Robot Motion Function

  • Incubator motion: A function that opens or closes the door of the incubator. When opening the door, the robot arm moves in a circular arc; when closing, it shuts the door by striking it firmly.
  • Plate open: Opens and closes the lid of the plate. As with the conical cap, there was a problem in which the lid did not fall off when lowered, so a shaking action was added: pick up the plate, shake it a set number of times from side to side, and then put it back down.
  • Throwing plate: A function for removing plates that are no longer in use from the plate holder. The robot picks up the plate, moves it near the tip removal bin, then opens the gripper and throws the plate away.



3. GUI & networking



  To create a user-friendly environment, we developed the GUI using the tkinter module, one of the various interface development modules provided with Python. In addition, we implemented remote communication over TCP/IP so that users around the world can conduct experiments with SynBioBot.


Interface of GUI


  The interface structure of the GUI is as follows.


Fig11

  If the user types the correct ID and password on the login screen, the GUI moves on to the next screen. On that screen, the user can choose between a mode that performs the entire experiment at once and a mode that performs the experiment step by step.


Fig 12. Login screen


Full experimental mode

  The full experimental mode is designed to let the robot perform all the experimental processes with a single click. Researchers with a good understanding of the experimental process can efficiently reduce the time spent on the experiment through the full experimental mode.

  In addition, the GUI displays the camera feed received from SynBioBot's server computer so that the user can watch the experimental process in real time.


Fig 13. Full experiment mode

Direct control mode

  Direct control mode, developed for users who use SynBioBot for training, performs the experiment in stages. Users can move the robot step by step and gain a complete understanding of the experiment.


Fig 14. Direct control mode

  When the user clicks one of the buttons, which are laid out to match the positions in the working space as shown, the robot arm moves to that position. When the robot has finished moving, additional buttons appear, and the robot arm performs an action when the user clicks one of them to choose which action to perform.


Fig 15. Additional button1
Fig 16. Additional button2
Fig 17. Additional button3
Fig 18. Additional button4

  As with full experimental mode, direct control mode also displays the camera feed received from SynBioBot's server computer so that the user can watch the experimental process in real time through the GUI.







Communication

Fig 19. Scheme for communication structure

Overall communication

  The robot is driven only by its controller. The controller is an embedded system that cannot use the variety of Python modules, nor can a camera be connected to it directly. To resolve this, we set up a desktop computer as a server to communicate with the controller.

  We used TCP/IP as the communication method, implemented with the socket module built into Python.

  Subsequently, TCP/IP together with port forwarding was used to connect the server computer and the GUI remotely. This allows users far away to drive the robot through the GUI.

Fig 20. Detailed scheme for message communication

Detailed communication

Communication among the GUI, the controller, and the server PC had to satisfy the following three conditions.

  1. Real-time image information should be transmitted continuously from the server computer to the GUI.
  2. Messages about motion should be sent from the GUI to the controller and the server PC.
  3. Communication between the GUI and the server should begin only after logging in.

 To satisfy these three conditions, we used the following methods. First, the server uses the threading module, because image information and motion messages must be delivered simultaneously. In addition, motion messages had to be relayed reliably among the GUI, the server, and the controller. Finally, for security, the connection between the GUI and the server is established only after a successful login.
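The server's structure can be sketched with Python's built-in socket and threading modules; the ACK reply, the single accept, and the message format are illustrative assumptions rather than the team's actual protocol, and the image-streaming thread is omitted.

```python
import socket
import threading

def relay(conn):
    """Handle one motion message (stand-in for forwarding to the controller)."""
    data = conn.recv(1024)
    conn.sendall(b"ACK:" + data)   # the real server would forward, then reply
    conn.close()

def start_server(host="127.0.0.1"):
    """Listen on an OS-chosen port and serve one connection in a thread."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, 0))            # port 0: let the OS pick a free port
    srv.listen(1)

    def accept_once():
        conn, _ = srv.accept()
        relay(conn)

    t = threading.Thread(target=accept_once, daemon=True)
    t.start()                      # image streaming would run in a second thread
    return srv, srv.getsockname()[1], t
```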