Introduction
The 2022 NEFU_China team has built a fully functional, cost-effective hardware automation platform based on our standardized DNA assembly strategy, together with human-computer interaction software for the embedded hardware that simplifies operation. Our hardware enables efficient batch pipetting and bacterial colony picking for DNA construction (Build) experiments.
Based on our experience in DNA construction experiments, the general process for generating a new construct includes plasmid digestion, DNA fragment purification, ligation of the digested DNA, transformation, and colony selection. Summing up the operations needed in these processes, we found that they consist of three types of actions: system configuration, constant-temperature incubation, and positive clone selection. The high degree of consistency among these operations inspired us to pursue standardized assembly methods and to create a high-throughput, automated DNA construction process.
For this hardware platform, we only need to provide the necessary materials for DNA assembly: a backbone plasmid, restriction enzymes with suitable buffers, and T4 DNA ligase supplied with its buffer. We can program the hardware through the interactive interface to automate the operations sequentially. Afterwards, the platform processes the digestion, ligation, transformation, and construct identification automatically by taking appropriate amounts of the supplied reagents, mixing them thoroughly, incubating them at preset temperatures, transforming the ligated DNA, and picking colonies to screen for positive clones.
Our hardware automates many of the repetitive manual operations in the build process. The overall design is modular and highly scalable, with four modules: the frame module, the head module, the robot arm module, and the vision recognition module. The frame module uses high-performance Type 42 stepper motors which, together with our control program code, can operate at high speed with an accuracy of up to 0.01 cm, enabling more efficient and accurate positioning than can be achieved by hand. Together with our pipette module, we can automate the adjustment of the pipette to any range, the aspiration and release of liquid, and the ejection and replacement of tips, simulating and replacing the majority of manual operations in construction experiments. Our hardware allows large numbers of DNA fragments to be assembled efficiently in a short period of time, freeing the experimenter from repetitive tasks to concentrate on research and discovery.
In contrast to most common pipetting stations, we have built a vision module into our automated construction equipment that combines the hardware with a camera and computer vision algorithms to visually recognize microbial colonies for picking. This encompasses all the operations required for DNA construction experiments and takes automation to a new level. Our research and experiments have resulted in a computer vision algorithm that not only accurately identifies colonies, but also filters them by size and classifies them by color according to predefined experimental requirements. Once the experimental settings have been programmed, the entire process requires no human involvement, significantly reducing the manpower and time spent selecting and picking colonies. When testing the platform in practice, we found that the identification and screening rates were very high. The platform also provides high-accuracy positioning and pipetting functions, well suited to the accuracy and reliability required in mechanized experiments.
Figure 1. Hardware Assembly
Figure 2. Quick installation of modular cameras
In order to reduce size and weight for better performance, we also designed two double-layer printed circuit boards (PCBs) ourselves: one is the STM32 core board, and the other is the mainboard that carries the core board. Our PCBs are designed to support higher input voltages and deliver higher currents, allowing for greater compatibility and stability and extending the life of the hardware.
In terms of price, the commercial pipetting stations currently on the market are expensive, and their scalability is still poor. Our choice is to open-source the hardware, from the code to the 3D model, which means you only pay for the cost of the raw materials and nothing else. Our modular design concept allows for greater scalability and possibilities, and also makes it more affordable to replace failed components and upgrade the hardware in the future, all for only $500! We provide the bill of materials (BOM) for all components of the hardware platform; with the BOM and our manual, you can purchase the parts and build a complete set of hardware equipment at a price even lower than that of a new iPhone 14.
Download the 3D Model
Figure 3. Model and physical objects in the same view
Figure 4. Project drawing
Open Project
We are open sourcing for the entire iGEM community. We will open source the full source code for each module of the hardware automation platform to GitHub, including the embedded driver code, the control code, and the code for visual recognition. We will also open source the project files for the full 3D model created using SolidWorks software, as well as the project files for the two PCB boards designed using Altium Designer software.
In addition, we will create a detailed operation manual. The manual guides the user from zero, covering assembly of the hardware automation platform, flashing the code, and the complete steps for using it. This is followed by a short tutorial that further guides the user to modify the code to suit their environment and individual needs.
Our full code and engineering projects, as well as a full manual, will be made available to iGEM teams around the world, allowing even synthetic biology enthusiasts with little or no scientific training to build with our hardware after a brief learning curve, with the aim of helping iGEM teams around the world to build more efficiently.
Our hardware project is the product of a multi-disciplinary approach, encompassing not only biological ideas but also mechanical design, circuit design, electrical automation, embedded software programming, and other related knowledge. But you don't have to worry about ease of use: we have encapsulated as much of the complexity as possible during development, and with our user-friendly human-machine interaction software and control interfaces, anyone can use our hardware relatively easily. Because we have open-sourced all the content, including the code and design, you can also use our open-source code and manuals to understand how the hardware works and the principles behind it. This hardware can be your first project for learning about the interdisciplinary aspects of biology. We also hope that in the future, this hardware project will become one of the most inspiring projects for young people in science and innovation.
Look at our GitHub
Device Overview
The overall design of our hardware is modular, divided into four modules: the frame module, the head module, the robot arm module, and the vision recognition module. The modular design also allows the platform to be more flexible and more suitable for universal use, leaving room for future upgrades and reducing the cost of hardware replacement. At the same time, modularity reduces waste and e-waste, reduces carbon emissions, is greener for the environment, and conserves resources in our quest for sustainability.
Figure 5. Overall and module display diagram; A) Overall view; B) Location of each module;
The overall operation process is as follows: first, the experimenter sets up the relevant experimental information in our control interface; the interface automatically generates a set of control instructions, which the experimenter copies into the human-machine interaction software; after a manual click to confirm, the hardware platform drives the four modules according to the instructions and works automatically until the set tasks are completed. (A hypothetical sketch of such an instruction sequence follows.)
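The instruction format itself is not specified on this page; purely as an illustration, the generated sequence could be modeled as an ordered list of commands that the firmware dispatches one by one. All action names and parameters below are hypothetical.

```python
# Hypothetical instruction sequence for one pipetting task; the real
# format generated by the control interface may differ.
protocol = [
    ("move_xy",   {"x_mm": 120.0, "y_mm": 45.0}),   # go to the reagent well
    ("set_range", {"volume_ul": 20.0}),             # adjust the pipette range
    ("aspirate",  {}),
    ("move_xy",   {"x_mm": 40.0, "y_mm": 45.0}),    # go to the reaction tube
    ("dispense",  {}),
    ("incubate",  {"temp_c": 37.0, "minutes": 30}), # temperature platform
    ("eject_tip", {}),
]

for action, params in protocol:
    print(f"execute {action} with {params}")  # stand-in for the real dispatcher
```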
Each of our four modules is described in the following section.
The Frame Module
The frame module provides a 300 mm x 200 mm x 280 mm working volume in which the machine head can move freely with sufficient speed and precision. The frame assembly uses a high-performance ATMEGA2560 microcontroller as the central controller and a 256-microstep interpolation driver IC to achieve high-speed control with an accuracy finer than 0.01 cm, allowing for both speed and accuracy in high-throughput experiments. A temperature control platform has been added to the bottom of the module to maintain a specified constant temperature and provide a suitable environment.
The overall frame structure of the machine borrows from the CoreXY design.[1] The XY motion platform is driven by two Type 42 two-phase stepper motors[2] controlled in parallel; a minimal sketch of the CoreXY kinematics follows Figure 7.
Figure 7. Structural motion decomposition diagram of CoreXY
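As background, CoreXY couples two fixed motors so that X and Y motion come from the sum and difference of the motor rotations. Below is a minimal sketch of this kinematics; the steps-per-millimeter constant is an assumption that depends on the pulley pitch, the motor step angle, and the 256x microstepping mentioned above.

```python
# Minimal CoreXY kinematics sketch. Sign conventions vary between builds;
# this uses the common convention delta_a = dx + dy, delta_b = dx - dy.
STEPS_PER_MM = 1280  # hypothetical: depends on pulley, step angle, microstepping

def corexy_steps(dx_mm: float, dy_mm: float) -> tuple[int, int]:
    """Convert a Cartesian XY move into step counts for motors A and B."""
    delta_a = dx_mm + dy_mm
    delta_b = dx_mm - dy_mm
    return round(delta_a * STEPS_PER_MM), round(delta_b * STEPS_PER_MM)

# A pure X move turns both motors the same way; a pure Y move turns them
# in opposite directions, matching the decomposition shown in Figure 7.
print(corexy_steps(10, 0))  # -> (12800, 12800)
print(corexy_steps(0, 10))  # -> (12800, -12800)
```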
For the Z axis, we improved on the traditional single-screw design by adding a second screw rod and a corresponding stepper motor, creating a synchronized dual Z-axis design. The resulting structure performs better and carries loads more stably than the traditional single-screw, single-motor design, and is better suited to the needs of biological experiments.
Figure 8. Our XY and Z axis designs; A) X and Y axis design; B) Z axis design
In order to provide a suitable environment, we added a temperature control platform at the bottom, which allows the temperature to be set within a specific range and held constant. The platform is made of an aluminum substrate with internal printed circuitry[3], heated by an external voltage[4], with a heating power of up to 130 watts for a relatively rapid temperature rise. A 100K thermistor, a microcontroller sampling circuit, an ADC routine for data reading, and a PID control algorithm[5] together achieve constant-temperature control of the platform; a minimal sketch of this loop follows.
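As a minimal sketch of this control loop (the real firmware runs on the microcontroller), the following assumes a 100K NTC thermistor on the low side of a resistor divider read by a 12-bit ADC, a Beta-model resistance-to-temperature conversion, and a textbook PID whose output is a heater PWM duty cycle; the Beta value, divider resistor, and gains are illustrative.

```python
# Sketch of the constant-temperature loop: thermistor -> ADC -> PID -> heater.
import math

BETA, R0, T0 = 3950.0, 100_000.0, 298.15  # assumed Beta-model constants (25 C ref)
R_SERIES, ADC_MAX = 100_000.0, 4095       # assumed divider resistor, 12-bit ADC

def adc_to_celsius(adc: int) -> float:
    """Convert a raw ADC reading into degrees Celsius (thermistor to GND)."""
    r_thermistor = R_SERIES * adc / (ADC_MAX - adc)
    t_kelvin = 1.0 / (1.0 / T0 + math.log(r_thermistor / R0) / BETA)
    return t_kelvin - 273.15

class PID:
    """Textbook PID; output clamped to the heater's 0..1 PWM duty range."""
    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd, self.setpoint = kp, ki, kd, setpoint
        self.integral, self.prev_err = 0.0, 0.0

    def update(self, measured_c: float, dt_s: float) -> float:
        err = self.setpoint - measured_c
        self.integral += err * dt_s
        deriv = (err - self.prev_err) / dt_s
        self.prev_err = err
        out = self.kp * err + self.ki * self.integral + self.kd * deriv
        return min(max(out, 0.0), 1.0)

pid = PID(kp=0.05, ki=0.002, kd=0.1, setpoint=37.0)  # illustrative gains
duty = pid.update(adc_to_celsius(2048), dt_s=0.5)    # one control step
```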
The Pipette Module
The pipette module automates five functions: clamping different pipette guns or multi-channel pipettes, arbitrary adjustment of the pipette gun's range, step-less adjustable downward pressure on the plunger, ejecting the current tip, and, together with the frame module, loading a new tip.
Figure 9. The pipette module
The pipette module uses a stand-alone STM32F103C8T6 microcontroller as the main control chip, with control program code for automatic movement.
We designed the module to fit mainstream pipette gun models for clamping and fastening, and produced the parts by 3D printing. The printed part holds the pipette gun in place and attaches to the moving platform of the machine frame through holes reserved at its rear. The modular design allows the pipette module to fit various pipette guns.
For the arbitrary range adjustment function, we use the 35BYJ412 stepper motor as the main component, which contains the motor power element and a gearbox with a 1:42.5 reduction ratio. Together with the control program code, this motor achieves a resolution of 0.132 degrees, sufficient for the accuracy requirements of the experiment. The input to the 4-phase stepper motor is a pulse signal; the control algorithm outputs pulses in different sequences and at different frequencies, allowing the stepper motor to convert the electrical signal into an angular displacement[6] that drives the range knob of the pipette gun in the head assembly. Stepper motors cannot be driven directly from an AC or DC supply, so we used a ULN2003 driver board, which receives the input signal from the microcontroller and amplifies it to drive the motor windings[7]. Between the stepper motor and the pipette gun we use gearing, so the motor's rotation angle divided by the corresponding reduction ratio adjusts the range of the pipette gun; a sketch of this step calculation follows.
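A minimal sketch of this calculation, assuming the common 8-state half-step sequence for this family of 4-phase motors and the 5.625-degree rotor half-step that, through the 1:42.5 gearbox, gives the ~0.132-degree output resolution quoted above; the GPIO write is a hypothetical callback standing in for the real MCU firmware.

```python
# Half-step drive of the 35BYJ412 through a ULN2003 (energizing order of
# the four phase lines); sequence follows the common 4-phase convention.
HALF_STEP_SEQ = [
    (1, 0, 0, 0), (1, 1, 0, 0), (0, 1, 0, 0), (0, 1, 1, 0),
    (0, 0, 1, 0), (0, 0, 1, 1), (0, 0, 0, 1), (1, 0, 0, 1),
]
DEG_PER_STEP = 5.625 / 42.5  # ~0.132 deg per half-step at the gearbox output

def steps_for_angle(angle_deg: float) -> int:
    """Half-steps needed to turn the output shaft by angle_deg."""
    return round(abs(angle_deg) / DEG_PER_STEP)

def rotate(angle_deg: float, write_phases) -> None:
    """Drive the ULN2003 inputs via write_phases(tuple), a hypothetical
    GPIO callback; negative angles reverse the sequence."""
    seq = HALF_STEP_SEQ if angle_deg >= 0 else HALF_STEP_SEQ[::-1]
    for i in range(steps_for_angle(angle_deg)):
        write_phases(seq[i % 8])  # pace with a delay in real firmware

# Example: turn the range knob by 90 degrees -> 680 half-steps.
rotate(90, lambda phases: None)
```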
To achieve step-less, adjustable downward pressure on the plunger, we use a DS3235 35 kg·cm PWM digital iron-core servo as the main power element[8]. We also designed a matching lever mechanism to simulate a person manually pressing down on the pipette gun. The lever mechanism can be programmed to rotate through 270 degrees, mimicking a human hand gently pressing and then fully pressing the pipette plunger. The advantages of this design are that it is lightweight, stable, and can easily achieve high transfer speeds.
The tip-ejection function reuses the power element that presses down the plunger: the current tip is knocked off by turning the 35 kg·cm servo in the opposite direction. In combination with the frame module, the pipette gun, now without a tip, is moved over the tip rack, and the platform is then moved upwards to press on a new tip, completing automatic tip replacement.
In order to reduce the size of the components for better performance, we designed the STM32F103C8T6 core double-layer PCB in-house using Altium Designer. This PCB is used not only in the head module but also in the robot arm module. Reusing the PCB reduces costs and provides a more environmentally friendly solution that matches our vision of sustainability.
Figure 10. Pictures of our designed PCB core boards; A) 2D view of the PCB; B) 3D model view; C) Physical view
Our pipette module went through several design revisions before arriving at the current version. The current module has shown sufficient stability in testing, while the high-precision components and control code give it reliable performance.
The automation of our pipette module, from arbitrary range adjustment to aspirating and dispensing liquids to ejecting and replacing tips, encompasses all the manual pipetting actions needed in construction experiments. The module can simulate and replace most of the manual operations of build experiments, making the entire process free of human involvement and taking automation to a new level.
The Visual Identification Module
Our visual identification module uses a camera and computer vision algorithms, together with the frame module and the pipette module, to perform the task of picking colonies. Together, these modules cover all the operations required for construction experiments, achieving a range of features that most pipetting workstations can hardly match.
A small industrial camera with 8-megapixel 4K resolution is fitted to the frame for stability and takes clear pictures of the petri dish[9]. The camera does not interfere with the pipette support or the overall frame. For wider adoption, the camera can be chosen according to actual needs (e.g., resolution, size, and weight); the visual recognition code does not depend on any particular camera model or brand. After taking a picture of the petri dish, only the colony color templates and the target colony size range need to be set manually. The code then performs the following tasks automatically (a simplified sketch follows the list):
- Select the black area (i.e., the carbon plate).[10]
- Identify the range of the petri dish and remove the interference.
- Load the two colony colors as templates.
- Calculate the average RGB values of the two colors.
- Assign matching thresholds.
- Color two types of colonies separately in the image.[11]
- Remove the overlapping areas and select the colony of the desired size and calibrate the center coordinates.
- Finally, output the center coordinates of each colony.
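The following is a simplified sketch of this pipeline using OpenCV; the circle-detection parameters, color tolerance, and area limits are illustrative, and the open-sourced code may differ in detail.

```python
# Simplified sketch of the colony-picking pipeline described above.
import cv2
import numpy as np

def find_colonies(plate_path, template_paths, color_tol=40,
                  min_area=20, max_area=500):
    img = cv2.imread(plate_path)

    # Steps 1-2: locate the dish (largest circle) and mask out everything else.
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2,
                               minDist=1000, param1=100, param2=50)
    mask = np.zeros(gray.shape, np.uint8)
    if circles is not None:
        x, y, r = np.round(circles[0, 0]).astype(int)
        cv2.circle(mask, (x, y), r, 255, -1)

    centers = {}
    for name, tpl_path in template_paths.items():
        # Steps 3-4: load each colony-color template, take its mean BGR value.
        tpl = cv2.imread(tpl_path)
        mean_bgr = tpl.reshape(-1, 3).mean(axis=0)

        # Steps 5-7: keep pixels within color_tol of the template, inside dish.
        dist = np.linalg.norm(img.astype(float) - mean_bgr, axis=2)
        hit = ((dist < color_tol).astype(np.uint8) * 255) & mask

        # Steps 8-9: connected components, filter by area, record centroids.
        n, _, stats, cents = cv2.connectedComponentsWithStats(hit)
        centers[name] = [tuple(cents[i]) for i in range(1, n)
                         if min_area <= stats[i, cv2.CC_STAT_AREA] <= max_area]
    return centers  # center coordinates per colony color
```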
Figure 11. Physical picture of the modular camera
Figure 12. Visual identification process demonstration
The complete visual identification code is open source in the Open Project section.
Verification of Identification Accuracy
The accuracy of colony screening can be verified by comparing the original image with the program-marked image.
To verify coordinate accuracy, we used the following scheme: following a checkerboard sampling method, the petri dish is divided into small squares, two colonies are selected at the same position in both the original and the marked image, and 10 such groups are taken. In the original image, the distance between the centers of the two colonies was measured manually with the Photoshop ruler tool; in the marked image, the distance was calculated directly from the coordinate values given by the program. (Unit: pixels)
Figure 13. Camera view of the Petri dish (original image and marked image)
| Group | Coordinate of Colony 1 | Coordinate of Colony 2 | Calculated Distance (px) | Measured Distance (px) |
|---|---|---|---|---|
| ① | (481.0, 215.0) | (489.0, 221.5) | 10.31 | 10.63 |
| ② | (528.0, 341.0) | (540.5, 360.5) | 23.16 | 23.85 |
| ③ | (542.5, 79.5) | (555.0, 76.0) | 12.98 | 12.65 |
| ④ | (360.5, 187.5) | (427.5, 82.5) | 124.56 | 128.70 |
| ⑤ | (403.5, 203.5) | (386.5, 252.0) | 51.39 | 51.62 |
| ⑥ | (522.5, 139.5) | (578.5, 119.5) | 59.46 | 63.25 |
| ⑦ | (566.0, 295.5) | (573.0, 290.5) | 8.60 | 8.60 |
| ⑧ | (582.0, 276.5) | (611.5, 285.0) | 30.70 | 31.32 |
| ⑨ | (647.0, 164.5) | (640.5, 256.0) | 91.73 | 93.26 |
| ⑩ | (489.0, 320.0) | (439.0, 288.0) | 59.36 | 60.54 |
Figure 14. Camera calculation results analysis
As the GIF shows, the coordinates given by the vision recognition code are accurate enough to stand in for manual measurement, so the repetitive work of picking colonies can be left to this automated hardware platform. (A quick arithmetic check of the table above follows.)
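For reference, the "Calculated Distance" column is simply the Euclidean distance between the two program-reported center coordinates; reproducing group ① from the table:

```python
# Reproducing the "Calculated Distance" column: Euclidean distance between
# the two program-reported centre coordinates (units: pixels).
import math

def pixel_distance(p1, p2):
    return math.hypot(p2[0] - p1[0], p2[1] - p1[1])

# Group 1 from the table: sqrt(8.0**2 + 6.5**2) = 10.31
print(round(pixel_distance((481.0, 215.0), (489.0, 221.5)), 2))  # -> 10.31
```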
The Robotic Arm Module
The robotic arm module has 6 degrees of freedom; its current use is to grip and move 96-well plates between our hardware table and the benchtop. It is important to note that the robotic arm module is not essential: it is a module we developed for more demanding future needs, and it has a lot of potential. Our frame module provides an effective area of 300 mm x 200 mm, enough to easily accommodate four 96-well plates. The frame size is a trade-off between our regular experimental needs and cost, and for most experiments our frame module is large enough (you can also build a larger frame module by following our open-source manual). However, if you need more than four 96-well plates in a single experiment, the robotic arm module can remove the finished 96-well plates and replace them with new ones, achieving a theoretically unlimited number of operations in a single run and taking automation to a new level. The robotic arm module is not just for moving 96-well plates: with its 6 degrees of freedom it can be seen as simulating a human arm and can attempt to replicate many of its operations, leaving more possibilities for the future.
Figure 15. Two views of the Robotic Arm Model
The arm module uses an independent STM32F103C8T6 high-performance microcontroller as its main control chip. The arm achieves 6 degrees of freedom of motion using two 35 kg·cm dual-axis PWM digital coreless servos, three 20 kg·cm dual-axis PWM digital coreless servos, and one 35 kg·cm single-axis PWM digital iron-core servo as its power elements[13]. With the servos as the power source, each servo is programmed to output a specific angle, allowing the arm to pick up and place 96-well plates. The arm body uses lightweight aluminum alloy as the main connecting material, which reduces the load on the servos and increases the accuracy of movement. A double-layered base made of C45 steel was designed to solve the stability problems that arose during initial commissioning. In addition, we designed mechanical jaws for 96-well plates that are compatible with most mainstream models on the market and achieve stable, firm clamping.
Based on the existing requirements, we designed our own mainboard for the servos; it also serves the robot arm module, since its master control chip is the same microcontroller as in the head module. We designed the core board and the mainboard as two modular PCBs implemented as a double stack[14]. PCBs designed this way support higher input voltages, can provide higher currents, offer wider compatibility and stability, and extend the life of the hardware.
Figure 16. Pictures of our designed PCB master boards; A) 2D view of the PCB; B) 3D model view; C) Physical view
In the PCB mainboard design, we also reserved interfaces for future upgrades, including physical pins for sensing and detection modules that can be expanded as demand grows. Temperature and humidity sensors, light intensity sensors, vibration sensors, fire alarms, and other external devices can be added in the future to monitor more environmental and experimental data and to guarantee the normal conduct of experiments through a complete detection and identification system. Below we show some PCB design details.
Figure 17. Circuit schematic A
For the power management part, the TD1583 power management chip converts the 7.3 V input into a constant 5 V to power the servos and the core board.
Figure 18. Circuit schematic B
For signal output, the PWM output pins of the core board are connected to the mainboard through pin headers; on receiving its PWM signal, each servo rotates by the commanded angle to the specified position. (A sketch of the angle-to-PWM mapping follows.)
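A minimal sketch of the angle-to-PWM mapping, assuming the common 50 Hz servo frame with a 500-2500 µs pulse spanning the DS3235's 270-degree travel; check the servo datasheet before relying on these numbers.

```python
# Angle-to-PWM mapping for a 270-degree digital servo (assumed timing).
FRAME_US = 20_000           # 50 Hz PWM frame
MIN_US, MAX_US = 500, 2500  # pulse width at 0 and 270 degrees (assumed)
TRAVEL_DEG = 270            # DS3235 travel, matching the lever design above

def pulse_width_us(angle_deg: float) -> float:
    """Pulse width commanding the servo to angle_deg (clamped to travel)."""
    angle = min(max(angle_deg, 0.0), TRAVEL_DEG)
    return MIN_US + (MAX_US - MIN_US) * angle / TRAVEL_DEG

def duty_cycle(angle_deg: float) -> float:
    """Duty cycle for the timer peripheral that generates the 50 Hz signal."""
    return pulse_width_us(angle_deg) / FRAME_US

print(pulse_width_us(135))  # -> 1500.0 us (mid travel)
```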
For the step-down stage, an RT9193 voltage regulator IC converts the 5 V input into a stable 3.3 V to power the STM32F103C8T6 microcontroller for normal, stable operation.
Figure 19. Circuit schematic C
Figure 20. Circuit schematic D
Future Directions
- Produce a product design specification to guide open-source community members who want to participate in design and manufacturing, along with quality and safety inspection standards to ensure the project always maintains a high level of quality.
- Design and manufacture a removable enclosure to reduce interference and contamination during device operation, and provide a light-shielding enclosure to accommodate a wider range of future experimental environments.
While our hardware platform is comprehensive and inexpensive, it does not currently meet all laboratory requirements, such as PCR amplification and the handling of competent cells. In the future, we hope to continuously upgrade our hardware to meet more needs.
References
- Amri, A. A. N., & Sumbodo, W. (2018). Perancangan 3D Printer Tipe Core XY Berbasis Fused Deposition Modeling (FDM) Menggunakan Software Autodesk Inventor 2015. Jurnal Dinamika Vokasional Teknik Mesin, 3(2), 110-115.
- Athani, V. V. (1997). Stepper motors: fundamentals, applications and design. New Age International.
- Hahn, B. D., Kim, Y., Ahn, C. W., Choi, J. J., Ryu, J., Kim, J. W., ... & Ma, B. (2016). Fabrication and characterization of aluminum nitride thick film coated on aluminum substrate for heat dissipation. Ceramics International, 42(16), 18141-18147.
- Lei, T. G., Calata, J. N., Ngo, K. D., & Lu, G. Q. (2009). Effects of large-temperature cycling range on direct bond aluminum substrate. IEEE Transactions on Device and Materials Reliability, 9(4), 563-568.
- Borase, R. P., Maghade, D. K., Sondkar, S. Y., & Pawar, S. N. (2021). A review of PID control, tuning methods and applications. International Journal of Dynamics and Control, 9(2), 818-827.
- Zribi, M., & Chiasson, J. (1991). Position control of a PM stepper motor by exact linearization. IEEE Transactions on automatic control, 36(5), 620-625.
- Wiki, F. G. (2012). Stepper Motor 5V 4-Phase 5-Wire & ULN2003 Driver Board for Arduino.
- Li, S. A., Weng, C. W., Chen, Y. H., Lo, C. H., Yang, M. H., Lin, Y. C., ... & Wong, C. C. (2012, November). Servo motor controller design for robotic manipulator. In 2012 International Symposium on Intelligent Signal Processing and Communications Systems (pp. 254-257). IEEE.
- Van de Straete, H. J., Degezelle, P., De Schutter, J., & Belmans, R. J. (1998). Servo motor selection criterion for mechatronic applications. IEEE/ASME Transactions on mechatronics, 3(1), 43-50.
- Jones, D. L. (2004). PCB design tutorial. June 29th, 3, 25.
- Birchfield, S., & Tomasi, C. (1998). A pixel dissimilarity measure that is insensitive to image sampling. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(4), 401-406.
- Borgefors, G. (1986). Distance transformations in digital images. Computer Vision, Graphics, and Image Processing, 34(3), 344-371.
- Gonzalez, R. C., et al. (1987). Digital image fundamentals. Digital Image Processing.
- Hartley, R., & Zisserman, A. (2003). Multiple View Geometry in Computer Vision. Cambridge University Press.