Logo fiat lux




  We developed software to track the luminescence produced by bacteria directly in plants, a direct need for our project. Our goal was to create an open-source tool that is easier to use than existing standards and accessible to all, so that it can serve as a basis for future innovation by iGEM teams. It has been validated by experimental work and is well documented. It runs on Linux, Windows and macOS, and is machine-independent. The whole code was written so that it can be embedded in new workflows. We have also developed a user-friendly interface to control the Raspberry Pi camera for our hardware. Both pieces of software are open-source, cross-platform, interactive, user-friendly and, above all, efficient.

The FIAT LUX app

Aim of the software

  The aim of our software is to allow scientists from all over the world to analyze images of plant infections by luminescent bacteria. It could also be used with other host models, or even with Petri dishes. The idea was to develop a simple, user-friendly, completely free tool accessible to all. Our software was created to address a direct practical need: we needed a simple way to analyze the images we generated during the infection of plants with FIAT LUX (bioluminescent bacteria). However, this is only one example of an application. Our code is completely open-source: it was created with the goal of allowing anyone to use and modify it to their convenience. Even though our software isn’t very complex in itself, it made a crucial difference to our project, as it connects the dry lab and the wet lab even for users who are not data analysts.

Our software is made of three distinct parts:

  • Importation of data
  • Treatment of data
  • Visualization

Figure 1 - Diagram describing the steps of the software

Functioning - Implementation and benefits

  Our software was implemented in Python. Even though its execution time is longer than that of compiled languages, Python seemed the most appropriate choice, as it is more widely known and easier to handle than other languages. The software is built with object-oriented programming, which gives the code a better structure: creating classes to manage our objects facilitated treatment and made operations easy to repeat. To keep the code well organized, we created a main.py file that the user can launch with a simple command in a terminal.
  The three parts of our code - importation, treatment and visualization - live in separate folders and are imported as modules in our main file.
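To illustrate this layout, here is a minimal, self-contained sketch (not the actual FIAT LUX code) of a three-stage, object-oriented entry point; all class and method names are illustrative only, and the real code imports each stage from its own folder rather than defining stubs inline.

```python
# Illustrative sketch of a three-stage pipeline behind a main.py entry point.
# In the real project, each stage would live in its own folder/package;
# here placeholder classes stand in for the actual modules.

class Importer:
    """Stage 1: load the raw data (here a placeholder list of frames)."""
    def load(self):
        return ["frame_0", "frame_1"]

class Processor:
    """Stage 2: treat the imported data (real code segments the images)."""
    def process(self, frames):
        return {"n_frames": len(frames)}

class Viewer:
    """Stage 3: present the treated data (real code uses Tkinter/Bokeh)."""
    def show(self, results):
        return f"processed {results['n_frames']} frames"

def main():
    frames = Importer().load()
    results = Processor().process(frames)
    return Viewer().show(results)

if __name__ == "__main__":
    print(main())  # -> processed 2 frames
```

The benefit of this structure is that each stage can be replaced or extended independently, which is what makes the code easy to embed in new workflows.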

   To allow interaction with the user, we chose Python’s standard graphics library, Tkinter, together with Bokeh, a library for creating interactive visualizations in modern web browsers. Bokeh lets us work through a server, which makes it simpler to put the tool online: since the server can be hosted online, a laboratory only needs one computer to run the analysis and calculations. Tkinter, in turn, lets us manage transitions between operations, which Bokeh does not. Thanks to this graphical interface, it is easy to ask the user to choose a folder or files, or to retrieve information to be used in other parts of the code.

  We chose to import our data as a video, to make the task easier for the user: the user only needs to record a video of the infection over time, and the rest of the analysis is taken care of by our software. This also avoids problems linked to mismanagement of data at the start: by generating the images ourselves, we can be sure that they are well organized, which guarantees the correct functioning of the software.

  Finally, the last choice concerns the management of data after treatment. The user can visualize the data in our software, but what happens to it after the software is closed? We decided that the user should export the data before closing the software, or else it is deleted. Working with freshly imported data is more efficient in terms of memory usage, simplicity and processing speed.

  The choices we made guarantee the simplicity of our software in different contexts and for different applications.

Example of usage

  In order to test the different functionalities of our software, we used an experiment from the wet lab: infection of chicory leaves by bacteria engineered to be bioluminescent thanks to FIAT LUX. We recorded a video showing the propagation of the bacteria in our plants over time. This proof of concept helped us demonstrate that our software works. It is important to note that the images arrived quite late in our project, which limited our ability to test the software thoroughly. Certain aspects of our software could be improved by future iGEM teams; they are described below.

Importation of data

  We chose to import data as a video made up of a succession of normal images (with LED lights on) and luminescent images over time. This video is then cut into folders corresponding to the different points in time; each folder contains a normal image and a luminescent image. Once the video is imported, the user must indicate the point of infection and the scale correspondence between pixels and centimeters. It is also possible to crop parts of the images, in case multiple plants or Petri dishes are pictured. The selection of the video and the order of the images are done via the Tkinter interface. The cropping of images, selection of the infection location and scaling are done in the Bokeh web app. The software performed as expected, as shown in Figure 2.

Figure 2 - Screenshot with the different folders and screenshot of Bokeh with the three types of information to give (location of infection, scale, cropping)
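The cutting step above can be sketched as follows. This is a simplified, hypothetical illustration of the folder layout the importer produces: one folder per time point, each holding a normal and a luminescent image. Real use would decode frames from the .avi/.mp4 with a video library such as OpenCV; here byte placeholders stand in for decoded frames, and the function and folder names are illustrative only.

```python
# Sketch: pair alternating normal/luminescent frames into t0, t1, ... folders.
import tempfile
from pathlib import Path

def split_into_timepoints(frames, out_dir, order=("normal", "luminescent")):
    """frames alternate between the two image types; pair them per time point."""
    out = Path(out_dir)
    for t, i in enumerate(range(0, len(frames) - 1, 2)):
        folder = out / f"t{t}"
        folder.mkdir(parents=True, exist_ok=True)
        (folder / f"{order[0]}.jpg").write_bytes(frames[i])   # LED-lit image
        (folder / f"{order[1]}.jpg").write_bytes(frames[i + 1])  # luminescent image
    return sorted(p.name for p in out.iterdir())

frames = [b"f0", b"f1", b"f2", b"f3"]  # 2 time points x 2 image types
with tempfile.TemporaryDirectory() as d:
    print(split_into_timepoints(frames, d))  # -> ['t0', 't1']
```

The `order` parameter mirrors the question asked in the Tkinter interface about whether the video starts with a normal or a luminescent frame.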

Data treatment

  Once all of the data is imported, it needs to be treated to interpret the visual results of the experiment. This starts by computing the percentage of infected area on the plant over time. To do so, we need to locate the plant within the image and compute its surface. The same computation is done for the infected part (i.e. the luminescent part). With these two elements, it is possible to compute the percentage of infected plant over time. This step is illustrated with our proof of concept in Figure 3.

Figure 3 - Left: segmentation of the leaf. Right: screenshot of the text file containing the stored information for the first points in time
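The statistics stored per time point can be sketched as below. This is an illustrative simplification, not the team's exact algorithm: simple intensity thresholds stand in for the real segmentation, and the threshold values are arbitrary.

```python
# Sketch: leaf area, luminescent (infected) area, and their percentage,
# using toy grayscale arrays and placeholder thresholds.
import numpy as np

def infection_stats(normal_img, lum_img, leaf_thresh=50, lum_thresh=30):
    leaf_mask = normal_img > leaf_thresh                 # pixels belonging to the leaf
    infected_mask = (lum_img > lum_thresh) & leaf_mask   # luminescent leaf pixels
    leaf_area = int(leaf_mask.sum())
    infected_area = int(infected_mask.sum())
    percent = 100.0 * infected_area / leaf_area if leaf_area else 0.0
    return leaf_area, infected_area, percent

# Toy 4x4 images: the whole frame is leaf, one quarter is luminescent.
normal = np.full((4, 4), 200, dtype=np.uint8)
lum = np.zeros((4, 4), dtype=np.uint8)
lum[:2, :2] = 255
print(infection_stats(normal, lum))  # -> (16, 4, 25.0)
```

These three numbers correspond to the columns stored in the text file shown in Figure 3.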

  To extend this idea, we wanted to compute the percentage of infected plant per category of luminescence intensity. We thus wrote code to localize the pixels corresponding to each intensity and computed the infection percentage for each intensity, and thus for each bacterial concentration (assuming that a given intensity corresponds to a given, constant number of bacteria). The software can also compute the surface of each intensity zone. Unfortunately, we do not know the actual correlation between luminescence intensity and bacterial concentration. Our proof of concept illustrates this treatment step in Figure 4.

Figure 4a - Segmentation for different intensities.

Figure 4b - Csv file with different points throughout time
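The grouping by intensity category can be sketched like this. The bin edges below are illustrative only, and, as noted above, the mapping from an intensity category to an actual bacterial concentration is unknown.

```python
# Sketch: share of the leaf falling into each luminescence-intensity bin.
import numpy as np

def intensity_zones(lum_img, leaf_mask, edges=(30, 90, 150, 255)):
    values = lum_img[leaf_mask]          # intensities of leaf pixels only
    zones = {}
    lo = edges[0]
    for hi in edges[1:]:
        in_zone = (values > lo) & (values <= hi)
        zones[f"{lo + 1}-{hi}"] = float(100.0 * in_zone.sum() / values.size)
        lo = hi
    return zones

lum = np.array([[0, 40], [100, 200]], dtype=np.uint8)
mask = np.ones((2, 2), dtype=bool)
print(intensity_zones(lum, mask))
# -> {'31-90': 25.0, '91-150': 25.0, '151-255': 25.0}
```

Each percentage row over time would then become one line of the .csv file shown in Figure 4b.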

  Our software also enables other tasks, such as generating an image composed of the normal and luminescent images side by side. It is also possible to merge the two images to visually follow the evolution of bacterial propagation. We also implemented a function to compute the propagation speed of the bacteria; however, we did not have time to produce an efficient version, as it considerably slowed down the treatment. We therefore decided to keep this function but not use it.
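The two composite images can be sketched as below. The side-by-side composition is a simple horizontal concatenation; for the superposition, a plain weighted blend is shown here as one possible approach, which may differ from the team's exact merging method.

```python
# Sketch: the side-by-side "visualization" image and the superposed "sup" image.
import numpy as np

def side_by_side(normal, lum):
    """Concatenate the two image types horizontally."""
    return np.hstack([normal, lum])

def superpose(normal, lum, alpha=0.5):
    """Weighted blend of the two image types (one possible merging scheme)."""
    blend = alpha * normal.astype(float) + (1 - alpha) * lum.astype(float)
    return blend.astype(np.uint8)

normal = np.full((2, 2), 200, dtype=np.uint8)
lum = np.full((2, 2), 100, dtype=np.uint8)
print(side_by_side(normal, lum).shape)  # -> (2, 4)
print(superpose(normal, lum)[0, 0])     # -> 150
```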

Visualization of the results

  The goal of the last part of our software is to let the user visualize treated data. A Tkinter tab opens a window for the visualization done by Bokeh. This window lets the user explore the results of the data processing; its different parts allow different tasks (explained in more detail in the tutorial part). The main idea is to visualize, as a function of time and position within the plant, the intensity of the luminescence and thus, indirectly, the concentration of bacteria.


Installation and launch

Introduction to the use of a Command Line Interface:

  A command line interface - more commonly known as a terminal - is a text-based interface in which the user communicates directly with the computer by typing commands. The user enters a command into the terminal to perform an operation, and the machine displays a response with the requested result (if the command is typed without error).

  In order to install and use our software, it is necessary to use a terminal. Don't panic, nothing complicated! Just a few very simple commands to use. To get more information, you can read: Mozilla command line help.

Open a terminal according to the Operating System (OS):

  • Linux: Search and open "Terminal".
  • Windows: Search and open “Windows Powershell”
  • MacOS: Search and open “Terminal”
  • A window of this type will open depending on your OS:

    Figure 5 - The last visible line (C:\Users\Administrator\Desktop) indicates the folder where you currently are in your computer; it corresponds to what is called the path. The ">" element indicates that you can write.

    In our case, you will only need to use 2 or 3 common terminal commands:

  • If you want to enter a folder, you can write: cd path/folder
  • If you want to go to the parent folder, you can write: cd ..
  • If you want to display the content of your folder, you can write: ls
  • If you want to see your current location, you can write: pwd
  • When you are looking for the name of a folder/path/file, you can use the tab key on the keyboard; it will display the proposals more quickly ;)
  • Here’s an example:

    Figure 6 - In this example, we are initially in the GitHub folder. This folder contains two folders that we have listed with the ls command: Fiat_Lux_iGEM and iGEM-Images. Using the cd Fiat_Lux_iGEM command, we enter this folder. As you can see from the red boxed line, we have changed folders. We then ask for the contents of the folder to be displayed again using the ls command. Finally, using the cd .. command, we return to the GitHub folder.

    Before installing our software, choose the folder where you want to store it ;)


  • Make sure you have Python on your computer:
    • Windows: type py in your terminal. If Python is installed, this command will tell you which version you have; you need at least version 3.8. Then press Ctrl+Z (^Z will appear) and hit the Enter key to exit the Python prompt and return to a classic command line (">").
    • Linux/MacOS: type python --version in your terminal. If Python is installed, this command will print which version you have; you need at least version 3.8.
    • If this is not the case, download Python through the following link: https://www.python.org/downloads/ (version 3.8 or higher).
  • Install git: if it is not already on your system, download it from https://git-scm.com/downloads.
  • Have the right version of pip:
    • Windows: type py -m pip install --upgrade pip in your terminal
    • Linux/MacOS: type pip install --upgrade pip in your terminal


  • Access to the code: clone the repository with git clone https://gitlab.igem.org/2022/software-tools/insa-lyon1.git (or download the folder from the same address).
  • Installation of libraries:
    • Go into the folder insa-lyon1/FIAT_LUX_APP with the cd command.
    • Make sure you are in the folder containing requirements.txt with the ls command. If not, go to the right folder :)
    • Enter in the terminal
      • Windows: type py -m pip install -r requirements.txt in your terminal
      • Linux/MacOS: type pip install -r requirements.txt in your terminal

    Launching the software:

    In order to launch the software, you will need to proceed as described:

  • Go into the folder containing main.py from the terminal (FIAT_LUX_APP → Code → main.py)
  • Type into the terminal:
    • Windows: py main.py
    • Linux/MacOS: python3 main.py
  • Well done, the software is launched :) Follow the information given by the terminal to track the data processing’s evolution!
  • Homepage:

    Once the command line has been submitted in the terminal, the software opens the homepage. Two tabs are present on this page: Data (management of the data) and Visualization (allows the user to visualize the data).

    Data tab:

    The user can choose to import the data, crop already-imported data or export data after treatment.

    New tab:

    When using this submenu, a window opens asking the user if he wants to import previously treated data by the software, or untreated data.

    Case n°1: Untreated data

    Selection of the video:

    The user is asked to select a video of his choice and indicate the order of the images (luminescent/normal or normal/luminescent). The video must be an .avi or .mp4 file.

    Once the video has been selected, it is cut to generate folders, corresponding to the different points in time and containing two types of images (luminescence and normal). Dynamic monitoring is possible thanks to the display in the terminal.

    Retrieval of information for the treatment:

    A Bokeh window is then opened to enable the user to enter the necessary information for the treatment. To validate the launch of the treatment, the user needs to give different pieces of information. The user must first choose the data he wants to treat thanks to the drop-down list “image”. Once the choice has been made, the image appears in the graph A.

    Then, the user can cut the image into sub-images in the case of replicates, or simply crop it. The idea is, for example, to keep only one chicory leaf for the treatment. To proceed with the cutting, select the small dotted rectangle tool. Once this is done, the cropped image can be seen on the right.

  • On graph A, the user needs to provide information regarding the scale between pixels and centimeters. He needs to click on two different points on the image. If he wants to change the entry, he must double click on the graph and start over.
  • On graph B, the user must click on the image to indicate the location of the infection point: a red dot appears. Again, if he wishes to change the entry, he must double click on the graph and start over.
  • Finally, the user needs to give a name to the folder, in the appropriate space on the bottom left part of the screen, next to the “save” button.
  • The user must not leave this page without having clicked on “save” or “quit”.
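The two clicks on graph A can be turned into a pixel-to-centimeter scale factor as sketched below. This assumes the two clicked points mark a known real-world distance; the function name and parameters are illustrative, not taken from the actual code.

```python
# Sketch: derive a cm-per-pixel scale from two clicked points.
import math

def cm_per_pixel(p1, p2, real_distance_cm):
    """p1, p2: (x, y) pixel coordinates of the two clicks on graph A."""
    pixel_distance = math.dist(p1, p2)  # Euclidean distance in pixels
    return real_distance_cm / pixel_distance

# Two clicks 100 px apart marking a 5 cm reference:
print(cm_per_pixel((0, 0), (100, 0), 5.0))  # -> 0.05
```

Multiplying any pixel measurement (surfaces use the square of this factor) by the scale then gives results in centimeters.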

    After the data has been saved or the window closed, the treatment of the data is launched.


    The user can then wait until the treatment is done. During this phase, the user can’t interact with the software. A succession of images is displayed in the terminal, so the user can follow the different phases of the treatment. The sentence “saved start processing” indicates to the user that the treatment has indeed been launched.

    The software starts by retrieving the images associated to the different moments in time, to start the treatment.

    A text file is then created to save the following information for each moment in time: surface of the plant, infected surface of the plant (corresponding to the luminescent surface), and the percentage of infected surface relative to the total surface of the plant.

    The next step includes the generation of an image composed of the two image types side-to-side for each moment in time. This image is called “vizualization.jpg”.

    The algorithm then superposes both images. This image is called “sup.jpg”.

    The last two steps of the treatment include the identification of the different intensity zones. Two .csv files are generated and will be used for the visualization.

    Case n°2: Previously imported data

    The goal here is for the user to avoid having to go through another treatment phase, if he simply wants to visualize the data. However, he should be careful to provide a folder completely identical to the one he previously exported.

    Cut your data:

    The goal of this subtab is to give the user the opportunity to return to the previous step, the retrieval of information for the treatment. This way, he can apply a different treatment to data that has already been imported. The user is then sent to the window for the cropping of the images.

    Export data:

    This subtab allows data to be exported. As previously explained, if the user wishes to save data before closing the software, he needs to export it. Therefore, he is given the chance to export the data and choose where to export it to.

    Visualization tab:

    After having imported data in the software, it is possible to visualize it. The user must select the submenu Visualization of the Visualization tab.

    A web page opens with different windows (explained below). First, the user needs to select the experiment he or she wishes to visualize with the drop-menu on the top left.

    Three sliders can be seen, and they all have a different role:

  • Height to control the green visualization line of the image, vertically
  • Time to navigate across time
  • Gap to control the thickness of the visualization

    The graph B corresponds to a graph called “% zone”, presenting the percentage of each intensity group with respect to the whole leaf. A vertical dotted line represents the time step that is visualized on graph D (currently at time 0).

    If the user wishes to visualize the intensity of a precise area of the plant, there are multiple possibilities.

    Move the “Height” slider: a green dotted line is visible on graph D and can slide vertically according to the slider. The graph C presents the intensity of each pixel along the green line.

    If the user wishes to visualize this intensity according to a certain thickness of pixels, he can do so with the “gap” slider.
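The intensity profile behind graph C can be sketched as follows: the "Height" slider picks a row of the luminescence image, and the "gap" slider averages over a band of rows around it. The function name and the exact averaging scheme are assumptions for illustration.

```python
# Sketch: intensity profile along a horizontal line, averaged over a band.
import numpy as np

def line_profile(lum_img, height, gap=0):
    """Mean intensity per column over rows [height - gap, height + gap]."""
    lo = max(0, height - gap)
    hi = min(lum_img.shape[0], height + gap + 1)
    return lum_img[lo:hi].mean(axis=0)

img = np.array([[0, 10], [20, 30], [40, 50]], dtype=float)
print(line_profile(img, height=1))         # -> [20. 30.]
print(line_profile(img, height=1, gap=1))  # -> [20. 30.]
```

A gap of 0 reads a single row; a larger gap smooths the profile by averaging neighboring rows.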

    Finally, the user can choose to manually define the line of pixels desired to visualize if a horizontal line isn’t suitable. To do so, the user can click on two different locations on graph D. A pink dotted line appears and the intensity can be found on the graph A. To redefine a new line, the user needs to double click to start over.

    The graph A can be used to determine the intensity. Indeed, a user can pinpoint a particular point on this graph, such as the maximum, and a pink dot will appear on the graph D.

    After having explained the general functioning of the software and a few key points, we invite you to watch this video demonstrating the functioning of the software in another manner.

    In the case of encountering a problem with the software, please don’t hesitate to contact Manon Aubert (manon.aubert@insa-lyon.fr) or Théo Mathieu (theo.mathieu@insa-lyon.fr).

    Camera piloting software

    Aim of the software

      This software was developed after exchanges with biologists, who pointed out that having to use a Raspberry Pi could discourage the use of our hardware. FIAT LUX camera control has a user-friendly interface coded in Python. Its goal is to simplify capturing images with the Raspberry Pi and its camera module. The user can define two different types of photos with different parameters and launch a capture with a defined time interval between the two shots. As this software was developed for our hardware, it also lets users set the intensity of the LED matrix linked to the Raspberry Pi and read the temperature as the photo is being taken.

    Back-end development

      The programming language we used is Python, as its numerous libraries and simplicity of use make modifications by future users easier. The Tkinter library is used for the user interface, and the libcamera-still command-line tool is used for image capture.

      The code was developed in an object-oriented manner, which makes it simple to increase or decrease the number of images per sequence, as well as the number and type of image parameters. It would also be possible to add a fan or other LEDs, which would simply be additional parameters of the sequence.

      When the user wants to take nocturnal images, the red and blue pixel gains are fixed at 1 to allow higher exposure values. The ISO corresponds to the image gain and artificially increases the intensity of light registered by the sensor. We recommend not going above ISO 500, or the background noise will be very high.
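A nocturnal capture along these lines can be sketched as a libcamera-still invocation. The flag names below (--shutter in microseconds, --gain, --awbgains for the red and blue white-balance gains) follow the Raspberry Pi libcamera-apps documentation; verify them against your Raspberry Pi OS version before relying on them, as they can change between releases.

```python
# Sketch: assemble a libcamera-still call for a long-exposure nocturnal shot,
# with the red/blue white-balance gains fixed at 1.
def nocturnal_command(exposure_s, gain, output="night.jpg"):
    shutter_us = int(exposure_s * 1_000_000)  # libcamera-still expects microseconds
    return [
        "libcamera-still",
        "--shutter", str(shutter_us),
        "--gain", str(gain),
        "--awbgains", "1,1",  # fix red and blue gains at 1
        "-o", output,
    ]

cmd = nocturnal_command(exposure_s=180, gain=4)
print(" ".join(cmd))
# -> libcamera-still --shutter 180000000 --gain 4 --awbgains 1,1 -o night.jpg
# A real capture would execute it with subprocess.run(cmd).
```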

    Utility of this software

      Even though this software was developed to accompany our hardware, it was designed so that anyone can adapt it to another use. For example, it could be used for a time lapse of plant or bacterial growth. FIAT LUX camera control is the bridge filling the gap between the user and the command line. We developed this software in just one week from the moment we received the material (HQ camera). Even though we only managed to code a simple, primitive version of what we envisioned, it allowed the biologists in our lab to consider the many possibilities of such a device; they had no idea what a microcontroller could bring to the world of biology. This tool is, as we said, more than affordable, and we truly believe that it is with initiatives such as FIAT LUX camera control that the technology-biology alliance will be democratized and flourish.


      This tutorial demonstrates how to use the camera piloting software to take a lighted photo and a luminescent photo. After our first tests on bacteria containing our FIAT LUX plasmid spread on Petri dishes, an ISO of 5 and an exposure time of 180 seconds was sufficient. For in situ usage, we recommend an ISO close to 400 and an exposure time of 230 seconds. For static observations, it is important to favor a low ISO with a long exposure time rather than a high ISO with a short exposure time. The delay between two shots should be at least twice the longest exposure time, to leave time for the Raspberry Pi to process the image.
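The rule of thumb above can be checked with a one-line helper (the function name is illustrative):

```python
# Sketch: the delay between two shots should be at least twice the
# longest exposure time, so the Raspberry Pi can process the image.
def min_delay_between_shots(exposures_s):
    return 2 * max(exposures_s)

# Normal photo at 0.05 s, luminescent photo at 230 s:
print(min_delay_between_shots([0.05, 230]))  # -> 460
```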



    If you need technical support look here.

    This software is only usable with a Raspberry Pi.

  • Check that the Raspberry Pi is up to date.
  • Install Python (should be preinstalled)
  • Download:

  • git clone https://gitlab.igem.org/2022/software-tools/insa-lyon1.git
  • Or download folder from https://gitlab.igem.org/2022/software-tools/insa-lyon1.git
  • Place folder FIAT_LUX_CAMERA wherever you want
  • Open the terminal and move inside this folder
  • Install necessary libraries:
    • Make sure that you are inside the folder containing “requirements.txt” using the ls command. If not, move to this folder
    • pip3 install --upgrade pip
    • Type in your terminal: pip3 install -r requirements.txt

    To activate the camera port on the Raspberry pi, please refer to official documentation, as it depends on the Operating System version you are using. The Libcamera-still library is usually present on all Raspberry pi. If not, please update the OS, or check the documentation: https://www.raspberrypi.com/documentation/accessories/camera.html

    For the next steps, please follow the instructions in the images below:

    Screenshot 1 - Place yourself in the right directory and type python3 interface.py. You can also double click on the file and select “open with a terminal”.

    Screenshot 2 - A: name of the experiment. B: time between two sequences. C: name of the CSV file in which the temperature and humidity data will be saved.

    Screenshot 3 - Selection of the folder where the experiment is saved.

    Screenshot 4 - Slider to determine the light intensity of the LED during the live preview.

    Screenshot 5 - After clicking on the livestream preview, this window opens. It allows users to adjust the focus directly, if this is possible with the chosen lens. Close the preview to return to the app.

    Screenshot 6 - Set the exposure time and LED intensity for the normal photos, then click the Normal preview button to launch the shot.

    Screenshot 7 - Once the photo has been taken (after the exposure time has elapsed), a window opens. If you wish to keep this photo, you will need to save it, as it will be deleted at the next preview. Repeat this step to preview the second type of photo.

    Screenshot 8 - Once the sequence has been launched (by clicking on the launch button), wait until all the images have been taken (= number of shots * time between 2 shots). A folder has been created with the name of the experiment in the chosen path.

    Screenshot 9 - This folder is composed of multiple subfolders, corresponding to each point in time when a photo was taken, and to a CSV document which entails the temperature and humidity information for each time.

    Possible problems:

    If all the elements aren't displayed correctly, please try expanding the window. For any other problem, don't hesitate to contact Théo Mathieu: theo.mathieu@insa-lyon.fr

    Future improvements

    This software is very basic, yet it greatly simplifies the use of our hardware and the piloting of the camera with a Raspberry Pi. However, future iGEM teams could make some improvements:

  • To let the user choose the number of image types per sequence, as well as their names
  • To let the user export images in a different format (video, or all in the same directory, etc.)
  • To allow users to change some parameters during the shots
  • To add an algorithm to reduce background noise, if it is not possible to avoid it during the shot
  • To allow image cropping or the selection of a region of interest.