Integrated Human Practices

Every research project is, in some way, “integrated”: at the very least, into the current research context (the “state-of-the-art”), which previously published papers shape to a large extent. It is often also integrated into other, unrelated research, as the people working on the project exchange ideas and results with colleagues from the same or other groups, locally and internationally. An iGEM project additionally involves two specific aspects of integration that a basic science research project might lack:

  1. integration into the industrial landscape
  2. integration into society.

Beyond carefully reading the literature to integrate our project into the state-of-the-art, and beyond daily discussions among all team members and supervisors, we actively sought feedback from experts in the field, industrial leaders and the lay public before and during our project. The input we received helped us shape the project, guiding us towards certain experiments or ways to perform them. Alongside asking for feedback on the specific topics of our project, we decided to involve these stakeholders in a discussion about research integrity, a topic that fascinated us from the very beginning. After all, regardless of the specific subject under study, the way in which research is performed, and data are analyzed and presented, is crucial for everybody else to trust the claims.

Professor Anthony Forster

After meeting Professor Anthony C. Forster at the iGEM European Meetup in Hamburg, we took great interest in his work and his stance on science. Forster holds the chair in Chemical Biology at Uppsala University. We were particularly interested in his expertise regarding the incorporation of non-canonical amino acids (ncAAs), so we asked him for an interview, which was held on the 8th of July 2022.
While we are working on the in vivo incorporation of non-canonical amino acids into bacterial microcompartments, Forster gave us tips on working with ncAAs; his own work concerned in vitro incorporation into peptidomimetics rather than orthogonal synthetase/tRNA pairs. He began by telling us that ncAA methods can be quite tricky. He advised us to focus on just a few amino acids, since it is already difficult to get some to work, and they can differ greatly in their functions.

We also talked about which unnatural amino acids he has already worked with and whether these could also be interesting for us, for example methyl-serine or allyl-glycine. He suggested that we look at various works by Professor Peter G. Schultz, as he has done a lot of work in this field.

He told us that one could use directed evolution of libraries for the incorporation of more amino acids or finding interesting positions for the incorporation in target proteins. Unfortunately, this is associated with a considerable investment of time, as one does not know how long it would take to find useful changes in the constellation or interesting positions.

We thank Prof. Forster for the interview. It was very inspiring to get another opinion on non-canonical amino acids!

It all started when we met Cansu Tekin, at that time Innovation Manager at OriginBio. We talked about our project of producing indigo inside bacteria and about OriginBio, a company specialized in the upscaling of bioproduction. We agreed to an interview to talk about how we could design our project to make it easier to upscale. Unfortunately, we later couldn’t meet up with her, but we were extremely lucky to be able to meet Max Mundt, VP of Business Development at OriginBio (now Insempra). The company is specialized in upscaling the bioproduction of chemicals, for example lipids, to promote more bioproduction in industry.

The talk was held on the 6th of September 2022 over Zoom, and we started by talking about what OriginBio does and what kinds of bioproduction they already use. He told us about lipid production in yeast, which has already been upscaled, and explained that instead of entering the food market with these lipids, they gain far more value in the cosmetics market: bioproduction cannot match the efficiency and low cost of plant-produced oils in the food sector, but such lipids are regarded as a premium ingredient in the cosmetics industry, where the low-cost factor doesn’t have much of an impact.

Then we started to talk about our project and how one would normally approach upscaling a bioproduction process. First, one starts at a small scale by comparing different bacterial strains and determining which produces the most at the lowest cost. The most promising strains are picked and upscaled to bigger volumes. Bacteria behave differently in large volumes, so the strain that worked best at small scale is not always the best in a bioreactor. Factors such as metabolic stress and the far greater number of division cycles create problems for bacteria in a bioreactor. The division cycles pose a problem because, as their number increases, the bacteria tend to “push out” factors not needed for survival, including the product of interest. Hence it is important to stress the bacteria as little as possible so that they keep the genes necessary for producing the desired product.

That’s where our idea of compartmentalization came into play. With the enzymes in close proximity, the same efficiency could be achieved with fewer enzyme molecules. The metabolic burden on the bacteria would decrease, and with it the likelihood that lower-producing subcultures develop in the bioreactor.

We were encouraged to run a fed-batch culture. With that, we could simulate the number of divisions our bacteria would go through in a bioreactor and investigate whether our production decreases. Another suggestion was to try a FACS measurement and search for subcultures of lower-producing bacteria over time in our fed-batch culture.

The next topic in our interview was the use of genome-reduced strains. We talked about the problems of slow growth and poor stress response that naturally come with the deletion of certain genes, but also about the possibility of reducing the metabolic burden through overall lower protein production. Max Mundt was very interested in this idea, as he had never heard of it being used in industry.

We further discussed some models of our production that could be interesting, as well as some numbers on how cost-efficient our production would be: general figures showing, from an industry viewpoint, how much our compartmentalization would improve production.


With this talk, we not only gained insights into how the industry approaches the upscaling of bioproduction, but also got new ideas on which experiments are necessary to show that our project improves bioproduction. We also learned which possibilities our project offers in terms of making upscaling easier and more efficient. Furthermore, we discussed the use of genome-reduced strains and their future potential. All in all, a very eye-opening talk and an important learning experience.

As part of participating in the Fashion Days (for the full text, visit the Communication and Education page), we wanted to get the general public’s opinion on the concept of producing indigo using genetically modified bacteria. We therefore conducted a survey with the people at our booth. Overall, 196 people participated in this survey, and we received incredibly interesting insights! The questions were:

  1. How many jeans do you have at home?
  2. Is it important for you that all components of your jeans emerge from sustainable production?
  3. Would you wear clothes that have been made of materials produced in bacteria?
  4. Would you trust products emerging from genetically modified organisms (GMOs)?
  5. Do you have concerns/fears regarding the production of goods through bacteria?

After evaluating our survey, we realized two striking things. First, people have some concerns regarding GMOs and bacterial production, but the majority of fears stem from misconceptions and a lack of knowledge about the methods of synthetic biology. For example, multiple people voiced the fear that using products from bacterial production would make them sick, driven by the strong, although incorrect, association that bacteria in general cause disease. Secondly, a considerable number of people had trouble making judgements on questions 4 and 5, and many of them wrote directly on the surveys that they need (and also want) to know more about this topic to make a definite judgement.

From this survey, we learned the importance of science communication. Overall, a majority of the participants had no concerns regarding production in bacteria or wearing clothes with compounds produced in bacteria. However, we found skepticism towards genetically modified organisms. As the production of chemical compounds in bacteria would most likely involve genetic modification, this indicates that people lack information on the topic. In the conversations that followed, people were often more positive towards GMOs once the concepts were explained properly. The skepticism could thus be caused by a lack of understandable information explaining the basics of synthetic biology. This is exactly where we as upcoming scientists have to step in! It is our responsibility, today more than ever, to present our research in a scientifically correct, yet understandable way. Although our main focus was not industrial production, this survey taught us that even as a foundational advance team we should make an effort to explain the basic concepts we are working on to the public.
In addition to presenting our results as clearly and transparently as possible on the wiki, we also contacted a science communicator to learn more about how to best present our data (see interview).

Figure 1: Survey results. 13% of the participants have concerns about bacteria producing goods, while a larger share (21%) would not trust products emerging from GMOs. This points to a contradiction, as bacteria producing goods are most likely GMOs.

Figure 2: Survey results. Based on the results, people are in general positive towards bacteria producing compounds for clothes. Furthermore, sustainability matters to the customers.

In research, trust in the work of others is essential. Being able to rely on other people’s results for your own further research allows science to develop faster and is key in any research community. Research integrity has become increasingly important as the tempo of research increases, as the pressure to publish challenges good practice, and as the world around us grows more used to bendable truths. Research should therefore be conducted in a way where results can be trusted. However, one survey found that around 1.97% of researchers have been involved in misconduct [1]. Studies point to multiple factors with a negative influence on research integrity, including the pressure to publish, the use of the journal impact factor, and how researchers are assessed for promotion [2]. At the beginning of our iGEM project, we therefore decided to learn more about how we as a team could ensure that the results we produced in the lab could be understood, trusted and repeated by other teams in the future. We wanted to spread awareness of the topic as well as educate ourselves along the way.

By following our journey and the different interviews, you can learn more about research integrity. Click on the names to learn more!

We started this process by interviewing people with expertise in different fields to get a better understanding of the topic. We hosted a research integrity seminar for iGEM teams, led by EMBO’s Michele Garfinkel. Then we performed our own survey to gain further understanding and spread awareness of research integrity. We interviewed Insempra to gain a realistic point of view on the implementation of our project. Lastly, we summarized our learnings in a “Guideline to Research Integrity in iGEM” document for future iGEM teams.

Research Integrity Guidelines

First of all, we wanted to educate ourselves on the topic. This included reading the literature and getting familiar with some of the questions and challenges that you can expect to face as a researcher.

While research integrity is surely a personal matter, the whole scientific community plays a great role in defining and ensuring it. Therefore, we decided that our efforts should include not only our own team, but the whole iGEM community. We reached out to EMBO (European Molecular Biology Organisation) to collaborate with us on a seminar for other interested iGEM teams. For this we contacted Michele Garfinkel, Head of the Policy Programme at EMBO, whose work focuses on biotechnology, responsible conduct of research, and open science. In the seminar, the participants were introduced to research integrity, how to define misconduct, and principles of responsibility. The seminar was followed by a discussion session where the participants had the chance to ask follow-up questions.

Take away messages

  • What it means to be responsible and to work with integrity depends on the community you are in. How things are done may vary from place to place, so you need to learn which good practices apply in your field/community.
  • Responsible conduct of research is not a moral state, but something you have to learn.
  • Factors such as high competition, limited or short-term funding, and the pressure to publish can lead to a negative environment.
  • For younger scientists, the importance of declaring conflicts of interest is often not clear.
  • It is important to come together as a scientific community to discuss how to build a good culture, because changing a culture requires changing the system behind it.

We were happy that we had iGEM participants joining from many different countries to learn about research integrity!


To get even deeper into the topic and get a better idea of how Freiburg University addresses it, we conducted interviews with several professors.

At the Hamburg meetup, Anthony Forster stated that “iGEM is not real science” and that some project reports are disingenuous about their achievements. We were curious to learn the arguments behind this, especially as we were interested in research integrity ourselves. Forster said that how much “real science” there is in a project depends on how well the project and its experiments are documented. Unfortunately, he mentioned, little attention is paid by iGEM itself to how to create good documentation of experimental data and results. That begs the question: why is this the case? The teams are equipped with almost unlimited storage space on their websites, and the more documentation you have, the better, Forster pointed out. Thus, the issue lies elsewhere.

iGEM teams need to learn how to create good documentation for their results and experiments. Forster suggested this could be achieved by a faculty member or institution approving the wiki before it is frozen, as most of the projects are to some extent related to research at the faculty or institution. Otherwise, other experts in the field would have to approve the wiki, Forster proposed, but unpaid peer review for iGEM projects does not appear to be feasible. Other iGEM teams might review wikis before they are frozen, but these teams are most likely not experts in the field that the other team has researched, he stated.

Another way to improve documentation would be to push for publications of the iGEM projects, Forster noted. This is supported by the iGEM Foundation itself as seen for example in the “Special Issue on the 2019 and 2020 iGEM proceedings” published by the journal Synthetic and Systems Biotechnology. That an iGEM project ends in a publication is generally appreciated, but whether this is really a feasible goal for every team is questionable. High publication costs and further experiments without access to laboratories could put an end to these plans.

Something else that could lead to good documentation would be to encourage new teams to continue working on projects from former teams, he proposed. There are BioBricks from these former teams which could become more accessible with better documentation. Accordingly, we suggest that good documentation of experiments and results could become a bronze or silver criterion, which could bring about a suitable change.

One point we as a team genuinely agreed on with Forster was that there is no real “gold medal”. Every competition has the freedom to set its own rules. However, calling something a gold medal, as in other competitions such as the Olympic Games, implies that this gold medal is first place in the discipline. For anyone not well acquainted with iGEM, these terms set false expectations about the competition and what has been achieved.

This could lead to a culture of deception, wherein there is a tendency to present grand conclusions regardless of whether or not they fit one's experimental data, Forster reckons. Therefore, a rebranding of the “Medal” system is needed. We second Forster's proposal to rename it "Bronze/Silver/Gold Standard".

At the moment, a title as well as an abstract must be provided on the team list website by the 30th of September. This abstract should also be presented on the team’s main page, Forster suggested. As this page is the first point of contact for potentially interested parties, it should provide a summary of the project at a quick glance. Because these main pages are seen as so important, they are mostly filled with animations, impressive structures, and illustrations, and it is easy to get lost without quite understanding what the team really achieved.

In conclusion, greater focus should be placed on true science itself. Simple implementations to achieve this goal are the mentioned “Medal” criteria for the documentation, the rebranding of the “Medals” into something that isn’t misleading, and the addition of an abstract on the wiki main page.

Michal Rössler works as a science communicator at the CIBSS Centre for Integrative Biological Signaling Studies in Freiburg. In her position, she is responsible for the content and editorial work of research groups that belong to the CIBSS. Furthermore, she is a scientific illustrator and is passionate about visual science communication. With this interview, we wanted to get to know her point of view on how to address issues regarding research integrity and fake science. 

Important insights: during this interview we talked about the roles of a science communicator and the importance of displaying new findings and ideas in a comprehensible way, suited for the target audience, in a transparent way. 

" [...] Good scientific practice [for me] is the process of generating scientific findings and communicating these findings (…) transparently -Michal Rössler

We discussed how, ideally, a suspected fraud, incorrect data representation or modification of results can be handled from the role of a science communicator as well as from the position of a co-author of the scientific publication in question, and what is expected from the publishing author in such a scenario.   

" [...] it could also be an honest mistake, and I, therefore, think it's very important to openly talk about this [potentially wrong data] during the publication process. -Michal Rössler

We discussed the impact of fabricated or fake science in our everyday lives, and how much of the scientific results we encounter every day might be fake. Michal Rössler thinks that while not many labs or institutions willingly make up results and publish fake data, there are many misconceptions spread on the internet and especially on social media, undermining the importance of critical thinking when encountering any scientific claims from non-scientific sources.  

" [...] Of course, you always need to fact-check yourself. When I'm reading something about scientific findings, I ask myself: Who published this? Is this logical? Is this produced by like an authentic institution or a scientist? Or is this just made-up basically? -Michal Rössler

Finally, we talked about the importance of science communication, especially in times of the Covid-19 pandemic, and the lack of public trust in scientific results and conventional medicine, discussing how people like to believe “easy answers”, which can hardly ever be provided by scientists, who usually give more complex answers to many questions. We discussed how to tailor science communication to a broad population without over-simplifying our findings.

" [...] Sometimes you need to use [more complex terms] because they are a lot more specific. And also, to encourage the people to understand them. Maybe to make further research, to further read about this topic -Michal Rössler

Full interview:  

Michal Science Communicator 

Nikita Edel

We always start with the same question, and that is: What is good scientific practice for you? What's your interpretation of good scientific practice in general?

Michal Rössler

So, to me, it means especially that the process of generating scientific findings and communicating them is transparent. This means that it can be recapitulated by other people, and that it is guided by the will to get the best or ‘true’ scientific findings, and not by personal or financial interests. That includes transparency about the methods that were used to generate the findings, and to be accountable for mistakes, which can, of course, happen. So, for me, that is kind of what research integrity means.

Nikita Edel

Could you explain what a science communicator does?

Michal Rössler

I think that everybody talking about science is something of a science communicator. In my case, as a professional science communicator who is employed at the university, one of my main tasks is to help scientists disseminate their findings, either by supporting them in their own endeavors, writing texts myself, creating illustrations, establishing contacts with journalists, or by organizing events. That helps reach a wider audience for the scientific findings. 

Nikita Edel

The next question is also a question we do in every interview. That's a dilemma question. 

There is a situation described: After a couple of rounds of reviews, I, as a scientist, discover an error of omission in the data analysis of a co-authored manuscript. At this point, the paper has almost been accepted and the reviewers have never made any remarks about the data analysis. I know that my co-authors do not want to miss out on a chance to publish. I was not the person primarily responsible for this part of the data analysis. What would you do if you were in this situation, or had contact with someone in this situation?

Michal Rössler

Science should always aim at producing the results that are most likely correct. It should not be the goal to get the most publications or to publish in the highest impact journals, or to act out of other personal interests – even if it might be hard to always live by these standards. The coauthor who is described in the situation should definitely indicate this to the other authors of the study and have it corrected, even if they did not lead the study. I hope this is the answer you got from all interview partners.

Nikita Edel

It's still interesting because people deal differently with such situations.

Michal Rössler

Of course, the way funding is often organized can lead to outside pressure to publish in high-impact journals and to publish a lot. The described error is something that could push back the entire study, and no researcher would be happy about that. But errors occur, and then you need to be open about it.

Nikita Edel

Then the next question is: do you check the information or the science you are communicating on for any kind of falsification or fabrication of the data?

Michal Rössler

We do check, as far as it is possible for us as science communicators. If something seemed strange to me, I would not hesitate to point that out, but so far that has never happened to me.

Nikita Edel

So, if you look at the public, do you have the feeling that there’s a lot of fake science or fake scientific information in the world? Maybe looking at social media or the news. Or do you think most information regarding science is correct?

Michal Rössler

It depends on what you mean by ‘fake science’. If you mean actual scientific findings from researchers or institutions that are falsified, I actually don’t think there is very much of that. Still, any of it is too much. But, on the other hand, there is a lot of false information about science on social media that is generated by other players who are not scientists or actual experts in the field. If you include that, I would say there is a lot of false information about science. That means you always need to check for yourself when reading articles or postings about science: Who published this? Where is this published? Is this plausible? Is it by an authentic institution or scientist?

Nikita Edel

No, we meant falsified science, because, for example, in the background of the recent Alzheimer’s controversy there are still thoughts and ideas which were made up. For example, do you think, in the light of the corona pandemic, that the way science is communicated to the general public is wrong or should be done differently? For example, on TV, because there was the problem that people just didn’t believe in science even though there were channels where someone tried to communicate it. They just didn’t believe it. So, do you think something should be done differently?

Michal Rössler

I think there were a lot of very good communication efforts, but also communication efforts that weren't so helpful. Good science communication should not only state findings but also say how such findings are created – explain the background. And, ideally, it should also get into a dialogue with people, so they can ask questions or say what their worries are about a certain topic. During the beginning of the pandemic, a problem was that the communicated scientific findings changed a lot, as the knowledge about the virus increased. So, people weren't so sure what the consensus was. That led to a lot of insecurity and doubt, which remained even when more data was available.

Nikita Edel

It's not just that people didn't believe in science, but they started to believe in non-scientific information. Do you have a thought on why people so easily believed that information? Normally you would say, OK, that’s an institute, I believe that institute. But they started to believe in people who don’t have any scientific background. Do you have a thought about why they start to believe such persons?

Michal Rössler

I'm no expert in that field, I can only speculate. I think that many did not only now start to believe these people you mention, but were already a bit skeptical about science or conventional medicine before the pandemic. So it might have increased a tendency that was already there. Also, many of the people spreading false information and conspiracy theories were very loud and provided very easy answers. Scientists in scientific institutions did not give such easy answers. They would say things such as ‘evidence indicates’, or ‘there are different possibilities’. And that is something intrinsic to science – it can hardly ever provide such easy answers. That is why it is so important to also speak about how the scientific process works when communicating about science.

Nikita Edel

To look at your work. For example, writing understandable texts for people who are not scientists. Do you look at something specifically so that the text is still correct but easier to understand?

Michal Rössler

Yes, for example it is important to not use too many scientific terms, and to explain those that the text cannot go without. Ideally, this is done by giving examples from everyday life. Many of the texts I write are actually aimed at science journalists. For them I can go more into depth, and will also always provide a link to the original research and contact possibilities.  

Nikita Edel

Do you think that visual things like pictures or graphs help to communicate science?

Michal Rössler

Yes, definitely! But just as with texts, it is important to use them carefully and think of the audience: the graphics used in internal communication between scientists usually represent data or processes. In other contexts, they can also help raise interest, for example a beautiful microscopy image. So they can help in different ways: decorate or give information, or both. But even for images that are only used to decorate, I still find it important to tell the viewer what is depicted. Is this a microscopy image? How was it generated? Is this real or an artistic interpretation? Especially with realistic-looking 3D artistic renderings, I find it really important to say ‘this is not what it actually looks like, but this is an illustration that shows how we can imagine it.’

Nikita Edel

If we look, for example, at an advertisement where there is a graph that is technically correct, but the scale is manipulated so that it looks a specific way. Do you think that should be made more transparent, or that people should be better educated about how to interpret the information the right way?

Michal Rössler

Yes, and I think that is something that applies to all science communication, both to a wider audience and within scientific journals. It should always be clear and comparable what you show in graphics. Sometimes you need to change scales so that the tendency a graphic shows becomes visible. But it needs to be clearly labelled and maybe even additionally emphasized.

Nikita Edel

For us as an iGEM team, we will need to write a wiki. And we also want to make the wiki not just for the scientific community. We also want to make our website appealing to people who are outside of our scientific community. What tips would you give us so we can make the website more interesting or more appealing to the general public? 

Michal Rössler

Also here, images are very important. Not just data visualizations, but also photographs. I saw on Twitter that you already created illustrations that go with your project description. Those are great! Especially because readers will usually first look at the graphics and only then look at the text. Additionally, I think that including photos or other images from the laboratories can be very interesting, and also empowering for laypersons to better understand the scientific process. Additionally, maybe give some personal insights as well, and introduce the people who work in iGEM and what their motivation is.

Nikita Edel

So we close our interview, as always, with another dilemma question, but this time with specific answer options. We do this to have a comparison between all the interviews we conduct.

Michal Rössler

So. I always think it's good to include colleagues, but I wouldn't say that d) is the optimal answer or the best thing to do. I'd say it's kind of something from all of them. It is always good to get a second opinion. And to consider that there is a mistake – maybe there was a wrong setting, and if you find that, you can exclude that measurement. But if you don't find a reason, then it's best to leave it in. Also, I would try to repeat it and see if a similar outlier appeared again.

PD Joachim Boldt has been the deputy director of the Institute of Ethics and History of Medicine at the University of Freiburg since 2010. His research addresses ethical questions regarding synthetic biology, gene therapy, and clinical ethics. Furthermore, he is a member of the ethics committee at the University of Freiburg. With this interview, we wanted to gain insights into the topic of research integrity from an ethical point of view.


Important insights: In the interview, we first discussed how research integrity presents a problem for the research community. One example of such problems is when properly performed research is compromised for personal gain. Another is not publishing negative or less significant results, even though they also contain important knowledge, because they might neither get citations nor be accepted by high-impact journals:

" The problem is also in the background of how we measure scientific excellence -Prof. Boldt


How scientific excellence is measured affects which values researchers strive to reach. This could include publishing multiple articles in high-impact journals. However, as pointed out, this might create the wrong ideals. While fraud should have consequences, he also mentions the importance of acknowledging that the academic system also carries a part of the responsibility:

" We should not forget that the whole situation is part of how we organize our scientific system -Prof. Boldt

Therefore, we also discussed which measures we can take, as an iGEM team, to address problems of research integrity and measuring scientific excellence in our community:

" I would also encourage you to be part of the processes and decisions at your faculty and at your institutions to realize and rethink the way in which we measure who is worthy of becoming a professor or getting a permanent position -Prof. Boldt

Nikita Edel

So, today's interview is about the topic of research integrity and fake science. To begin, we wanted to start with an exciting game we had in our ethics lectures. It's called the dilemma game, and we wanted to ask you one question out of it. You can read it here.

So, this scenario is:

“After a couple of rounds of reviews, I discover an error/omission in the data analysis in a co-authored manuscript. At this point, the paper has almost been accepted, and the reviewers have never made any remarks about the data analysis. I know that my co-authors do not want to miss out on a chance to publish. I was not the prime person responsible for this part of the data analysis. What do I do?”

 PD Dr. Joachim Boldt

And now you want me to answer the question? Well, this is a real test because you haven't shown the example to me beforehand, so I'm answering off the top of my head. The first thing I would do is talk to my colleagues about my observation. There seems to be an omission, but maybe I'm wrong, you never know; maybe I have misread what was written there. So contact the co-authors, talk to them about it, and if it is an error or an omission, then contact the editor of the journal, pointing to this problem and asking for an opportunity to either correct the manuscript or have a supplement included with the correct data, or something like that. That should be no problem at all, I guess.

 Nikita Edel
Good. So the first question would be, what does that mean for you if we talk about research integrity or fake science?

 PD Dr. Joachim Boldt

The question for me would always be: what kind of ethical mistake or act would that be? What sort of principles would be neglected, and how serious is this issue? What is at stake in the ethical meaning of the term? And the answer would be, I guess, multifold, because research integrity has different purposes.
One, of course, and perhaps the most important one, is that you want the results to be reliable; you want others to be able to perform the same experiments and confirm the results.
It has to do with the reliability of your research results, which then in the end is part of the whole progress of science and progress of knowledge, and that of course is an ethical value. We want science to progress because that means more knowledge. That means more ability for us to improve on the ways we make use of our resources and so on.
It's about societal progress, so that's one issue, but another one, I guess, also, of course, has to do with relations to colleagues with trust, with people not playing with open cards to others, and so on. Therefore, it also has this relational aspect, I guess, and there may be others as well. I guess it's a multifold issue, with the most important one being we want science to be a reliable source.

Nikita Edel

And in that context, what ethical problems may arise in the scientific community if someone commits research misconduct, meaning they don't follow the rules?

 PD Dr. Joachim Boldt

Well, one of course is that you spread mistrust, and people tend not to believe each other anymore. If you know that a certain percentage of people are fraudulently misrepresenting their results, then of course you will always be on the watch, and you will perhaps not so easily share your own data with others. So, the medium- or long-term effect of this conduct becoming more or less frequent will be that scientific cooperation becomes more difficult and more reluctant, and that will have an adverse effect on scientific progress as a whole, and we don't want that. I guess that is the main issue.
Besides that, there are all those individual issues of people advancing their careers at the cost of others because they have some great publications, perhaps due to fake data, that others don't have. Maybe they have a greater number of papers available. So there are all those systemic effects. The problem is also in the background of how we measure scientific excellence. People can, if they fake their measures, advance their own careers and take advantage of those who are the honest ones.

How do we find out how excellent research is? And again, this is not what we want, of course, from an ethical point of view. So those individual, collegial issues also play a role, of course.

 Nikita Edel

To maybe take it to a wider view: There are certainly some cases, for example, of manipulation of medical studies, where this manipulation doesn't only affect the scientific community itself, but it affects the general public. So, do you think the problem is bigger or the same for the general public as it is for the scientific community?

 PD Dr. Joachim Boldt

Well, with regard to medical studies, the first ones to suffer from bad research would be patients. And of course, patients are part of the general public, but they are not the whole general public.
But again, you're right. I mean, we have a special responsibility to protect patients, and vulnerable groups in general, from the bad effects of bad research.
I'm not so sure, though, that medical research poses a special problem. Of course, it's true that if we have wrong or manipulated data in medical research, that will be a problem for patients immediately, if it is research concerned with some form of therapy, diagnosis, diagnostic tools, therapeutic medicine, or anything along those lines.

But I would guess that at that stage, maybe I'm a bit too optimistic here, the regulatory framework that we have imposed on study results, publications, clinical study design and so on is very tight. Of course, there's no guarantee that nothing bad can happen. Still, it's a much tighter framework compared to other areas of scientific research, so I would actually think, for good reasons, that the danger of ending up with a medical or diagnostic tool that is actually dangerous for patients as a result of manipulated data is not very big.

Nikita Edel

If we take it back out to biology, with the background of Corona we have a scenario where, I wouldn't say fake data, but the trust in science just isn't as big as it should be, or as we expected. Do you think that fake science could lead to the general public losing trust in science?

 PD Dr. Joachim Boldt

I think, of course, there's always this kind of danger, yeah? At least I don't think that single events of manipulated research will have a measurable effect on public trust in research. But if you imagine that we had a whole row of those events, then of course there would be a danger that public trust would be lost. And for good reasons: I might also lose trust in scientific research if I realize, “Oh well, every week a new case of manipulated research turns up.”
So, of course, there will always be a danger lurking in the background. But regarding Corona, I don't think that public distrust, if there was something like that, and there certainly was in some areas of society, had so much to do with manipulated or fraudulent research. I think there were other reasons why people began to doubt scientific advice.

 Nikita Edel

We know you're a member of the ethics committee for medicine here in Freiburg, and we wanted to ask: does the ethics committee have something to do with cases of misconduct or anything like that?

 PD Dr. Joachim Boldt

No, not this kind of ethics commission. There are all sorts of different ethics commissions, councils and committees, and so on. At the level of institutions such as the university, you find all sorts of those ethics committees.
The ethics committee you were referring to is the one concerned with research involving humans, which means medical research in the first place, but it can also be research in the social sciences involving human participants. The only task of this ethics committee is to collect all sorts of documents concerning the planned research and then to check whether it appears ethically reasonable and acceptable or not, and then send it back to the applicant and ask them to change some details of their methodology or to improve the patient information sheet, or something like that, or just say everything looks fine, go ahead.
So, it's really an application check procedure and it doesn't have anything to do with data manipulation issues.
The University of Freiburg does have some sort of ethics group that would be concerned with those issues. I'm not a part of that group, so I'm not 100% sure how it works, but as far as I know, they will be alerted if some suspicion turns up that there has been fraud in research, and then they will be called together to check the individual case.

There is at the university hospital a clinical ethics group, of which I am a part, where we assist physicians and healthcare workers in ethically demanding situations in patient care. And there is an ethics committee at the university, of which I'm also a part, which is concerned with dual-use research of concern; that one is new, and we will still have to find out what its working procedures will actually be. So, there's a broad and diverse landscape of such ethics committees, each with its own way of working and its own task.
The one having to do with fraudulent research really is, as far as I know, dependent on someone else raising suspicion about some piece of research. It doesn't check by itself whether everything is OK. At our university, there's no watchdog sitting and checking publications from Freiburg University for erroneous data use or something like that.

 Nikita Edel

Of course, we as an iGEM team also want to know how we, as young researchers, can educate ourselves in general. How could we ensure that we as a group act more ethically and with more integrity?

 PD Dr. Joachim Boldt

One way, actually, I think, would be to become aware of how the systemic framework in some sense also makes possible, or even induces, fake research. I've mentioned that before, because I think part of the problem really is how we measure how good research is, and we often measure along the lines of quantitative statistics:
How many publications? How high was the impact factor of the journal where the publication appeared? Then you have the numbers, and it looks very objective, because you appear to be able to calculate who's the best. But the problem with this is that you can try to publish many, many papers and of course try to publish them in high-rank journals. And you can enhance these measures by producing fake science, if you do it cleverly.
In medicine, for example, one effect of those measures is that many people cited as authors may actually not be authors or only in very minor roles involved in the paper published.
So that's one of the problems, but another one, of course, is that you produce your own data with some very spectacular results in order to be able to publish in high-rank journals. A related effect concerns the publishing of negative results: you had a hypothesis, and it couldn't be confirmed by your experiments, which is also important for science. But try to publish such a result in a high-rank journal; it's not possible. Nobody is interested in those kinds of results. They are too boring.
And if you, as an individual, cannot resist these temptations, then, of course, there's a danger that you will turn into a bad scientist and make up your own data or do other bad things.
And I think, of course, you can then blame the individual and say: ah, you are bad, you have done something ethically wrong, we don't want that. That's also right, and we should, I guess, blame this person. But, on the other hand, we should not forget that the whole situation is part of the way in which we organize our scientific system.
So, I would actually advise you to first be aware of those systemic effects in order to be able, as an individual, to resist and say “I will not do that. That's not good, even though I know I will have an individual advantage of producing fraud research, I would not do it anyway”.
But at the same time, I would also encourage you to be part of the processes and decisions at your faculty and at your institutions to realize and rethink how we measure who is worthy of becoming a professor or getting a permanent position and so on and so forth. So, remaking those institutional structures, I think, really, that's an important issue for us at the moment.
There's no perfect solution, but we need other ways to measure who is a good scientist and who isn't.


 Nikita Edel

So, in the end, because we are really big fans of this dilemma game, we wanted to close with another question. But this time you have the usual four answer options to choose from.

“When screening my data, I find that there is one extreme observation. What do I do?”

A: “Nothing, it's a part of the theoretical sample for a reason.”

B: “I look for information on the observation, trying to find qualitative reasons for the deviation. If there is a good explanation for its position, it must be a niche observation since it is part of my theoretical sample, so I leave it in. However, if there is no explanation for its position, I leave it out, as it is there either due to measurement error or response bias.”

C: “I look for information on the observation, trying to find qualitative reasons for the deviation. If there is a good explanation for its position, it must be a niche observation, but despite my theoretical sampling there is no place for such anomalous observations, so I leave it out. If there is no explanation for its position, I leave it in to avoid potential sampling bias.”

D: “I let a colleague review the data and follow her advice, whatever it is.”

 PD Dr. Joachim Boldt

Yeah, D is easy, because you don't have any responsibility for what follows.

Then, I'm not a researcher using quantitative measures myself, so methodologically speaking this is unknown, alien terrain for me. But as an immediate answer I would say: whatever you screen as your data, you have to work with. You have sampled those data, you got them, you screened them. So, of course, you leave everything in, and then maybe in the discussion or conclusion section you point to this special data point and make some arguments about why this is an interesting observation. Why is this interesting data, or why isn't it? Why could it also be left out, and so on and so forth. But this reasoning should be part of the transparent part of your published research.

This is, I think, what I would say: leave it in and discuss it, and then all your colleagues can check it and come up with their own assumptions, and maybe repeat the experiment or sample their own data and see whether this phenomenon turns up again or not. And then the question “Is this data point reliable or not?” is left to the scientific community to answer, and not to your own individual reasoning about whether this is OK or not.

 Nikita Edel

Thank you very much. I think that would be it.

Dr. Chris van der Does has been a senior scientist and group leader at the University of Freiburg since 2014. Besides doing research in microbiology, he also teaches lectures in biology, e.g. an ethics course for biology bachelor students. With this interview, we wanted to learn how researchers address issues regarding research integrity and what role this subject should play in the biology programme and the education of budding scientists.

Important insights: In this interview we discussed the responsibilities you have as a researcher or student with regard to preventing misconduct. We also discussed that having an ethics course once during your bachelor's does not provide sufficient training in the field of research integrity. Ideally, discussions about integrity and ethics should recur in the master's, during the PhD, and as part of work discussions in your group:

"“as a supervisor you really have to learn your students how to read papers, how to check if the proper controls are there”

In this case the professors have a responsibility to pass on critical thinking to their students and regularly discuss what well-presented data and well-conducted research look like. Another responsibility Dr. van der Does pointed out is to create an open working environment, where the students are not afraid to say when they have made a mistake:

" for me it's always important in the lab that if mistakes are made that you are not too strict on people that make the mistakes so they are not afraid to come to you to tell them -Prof. van der does

However, there are also several responsibilities which you as a student have to be aware of. One of those is to find a group where you feel comfortable and have good communication with the professor.

" I tell master students who are looking for a group where they want to do their PhD: make sure that you have a supervisor where you have the feeling that you can say something like this -Prof. van der does

We also discussed which factors might cause scientists to publish fake science. One well-known factor is the pressure to publish:

" If you don't publish are essentially out -Prof. van der does

Jonas iGEM Freiburg  
We are the iGEM team from Freiburg. This year we want to address the topic of research integrity and fake science, and we want to ask different stakeholders about their experiences and, in general, learn more about how to address research integrity. As a first question, we wanted to ask what you consider good scientific practice?
Prof. Van der Does  
So I've been thinking about this for a longer time, and I've also tried to look on the internet for a nice definition, and the one I like the most is:
Honesty to yourself and to other researchers. 
I think that very simply defines scientific integrity for me: that you are honest with yourself and that you're honest to other people. I think if you follow that rule, then normally you are OK in 100% of cases. The U.S. Department of Agriculture has a very long definition, but the one that is just honesty is the most important one for me personally. I also looked at the European Code of Research Integrity; I think they make a new one every 10 years, and the latest one actually mentions four different topics. One of them is, again, honesty. Another is reliability: that you make sure you can reproduce things. The others are respect for colleagues, that you are honest to other people, and accountability: that you also perform the experiments in an ecologically and sociologically sound way. But for me, I think they all come back to honesty. I think most scientists want to do their best, find something new, and be honest about that.
Jonas iGEM Freiburg  
Ok, so next we want to have a short question game. You may know the dilemma game already because we had it in your ethics lectures. We chose this question; I can read it out loud once.
“After a couple of rounds of reviews, I discover an error/omission in the data analysis in a co-authored manuscript. At this point, the paper has almost been accepted, and the reviewers have never made any remark about the data analysis. I know that my co-authors do not want to miss out on a chance to publish. I was not the prime person responsible for this part of the data analysis. What do I do?” And we left out the answer options this time because we really want to have an open answer on this.
Prof. Van der Does 
This is for me a very easy question: I would immediately tell everybody that I found this. Actually, I have to remark that it would be different for me because I'm a scientist further along in his career. If I miss out on one paper, I have 80 papers, so it's not such a big problem for me. Of course, if you are a PhD student and it is your first paper, I can imagine that this is more difficult. But I always say: this is so incredibly damaging to your career. It is better to have a setback of a few months. I would always say it, and say it as fast as you can, the moment you discover it. If you don't, the problems get bigger and bigger. So, in this case I would say: sorry, OK, we've almost had this paper accepted, but this is not OK. We need to correct it and also immediately inform the editor, so that you resolve this.
Jonas iGEM Freiburg 
You already spoke about how it gets more difficult in the position of a PhD student because it might be the first paper. If the other authors are more advanced in their careers and refuse to see it as an error, what could you do as a PhD student?
 Prof. Van der Does 
I understand that it's very difficult, but I think it is really important that we teach our students to make a point against their supervisor. I tell master students who are looking for a group where they want to do their PhD: make sure that you have a supervisor where you have the feeling that you can say something like this. Yeah, we know that there are extremely good groups where the head professor is very direct and it's his way and it's the only way. I would try to avoid these groups, because I think those are the kinds of groups where these mistakes happen. You need a professor that pushes you, a professor that is strict with you, but you also need a professor that listens to you, and if you have concerns, you need to be sure that this is OK. And I think this is also an answer to your question: always be honest. Because it's science, you will make mistakes; everybody makes mistakes. At a certain moment you will find out that you made a mistake. Yeah, you will feel bad about it, but immediately tell your supervisor. If you wait a week, it gets more difficult; if you wait a month, more difficult still. The longer you wait, the more difficult it becomes to correct the mistake, and if it's published, it's even more difficult. So first of all, make sure that you tell it, and second of all, make sure you have a supervisor to whom you can tell it.
Jonas iGEM Freiburg 
What other factors might lead budding scientists in particular to falsify their work?
Prof. Van der Does 
Yeah, you mentioned it a little bit already. It's not only budding scientists, of course; you want to have your first few publications, and I think that even goes for people in their postdoc, people that have their first group. I think the pressure to publish is incredibly high. We discuss it also in the lectures. I mean, it's publish or perish: if you don't publish, you are essentially out. And I have to say, how to get over this, except by being honest to yourself, I don't really know. I see the pressure on some people, and sometimes you really can understand why somebody would be tempted to do something like this. But there are disadvantages to doing it, because it will always be found out. We need to make sure in the scientific community that the disadvantages of being found out are bigger than the advantages of doing something like this.
Jonas iGEM Freiburg 
OK, and a bit more personal, because you already have a lot of experience and have worked many years as a scientist: have you also experienced or come into contact with fake science yourself, and if so, how did you deal with it?
Prof. Van der Does 
Yeah, I think every scientist will meet situations, and I'm sure that you will also meet situations, where another group has performed experiments and you just can't repeat them. Then there is always the question: is that fake science, or are you doing something wrong? There I find it very difficult to say anything. For example, a paper was published, and two years later you look at the paper and notice that a figure in it is the same as one that was used two years earlier.
That happened in a group we are friends with. It then turned out that the PhD student had left two years earlier, and when they asked for the figure, they just picked it from their data, the figure they thought was correct. So it's not on purpose, but of course it's a mistake, and it just needs to be corrected. If you find that, you need to go to the journal. Nobody else had seen it, but I found myself saying: this picture is wrong, we need to replace it, and then you get a correction. Which is maybe negative, but you always need to do that.
Other examples I've seen happened during or after the PhD defense, when it was found out that large pieces of text had been copied from other papers. You very often have a beginning PhD student that doesn't really know how to write nicely, and then of course, if you have a perfect text somewhere else, you just copy that into the introduction. That doesn't feel as bad as copying results, which is of course really bad, but your introduction, too, you should write yourself. Those are things that I've unfortunately seen happen to people. Fortunately, it was always caught before publication, never after, but even before publication that should never happen, especially with the methods. Nowadays, I think there are so many plagiarism checks; if you do it, you will be found out.
Jonas iGEM Freiburg 
OK, you already gave a few tips for budding scientists. Do you have some tips in general for when we, as an iGEM team, come into a critical dilemma situation regarding fake science?
Prof. Van der Does 
Be honest to yourself. Be honest to your supervisor and if your supervisor doesn't listen, go to somebody else. 
Yeah, or even, when you see me in the ethics lecture, come to me, and then I can help you go to the Ombudsman or something, because I realise that as a master or bachelor student it is really difficult to step up to your professor. But you just need to do that, because this might follow you for the rest of your life. If you publish something that is wrong, or write something in your master's thesis that you know is wrong, in the end people will find out.
Jonas iGEM Freiburg 
And another point we mentioned already is that you teach; the ethics lectures, for example, but I know you also teach other courses in the bachelor's programme. Does the topic of fake science and research integrity play a role in your teaching?
Prof. Van der Does 
Of course, in teaching it's very clear. I think it's critical that all students listen to that lecture, and I actually think it's important that we give that lecture again for the PhD students, so that we've all really heard it. The other point where it's important to teach people is during their bachelor's and master's studies, so that you discuss plagiarism and falsification with your students. Before, I mentioned: take a supervisor that listens to you. And for me I would add: then be a supervisor who people think will listen to them. For me it's always important in the lab that, if mistakes are made, you are not too strict on the people that make them, so they are not afraid to come and tell you. If people are afraid to tell you their mistakes, you are on the border of scientific fraud, I would say.
Jonas iGEM Freiburg 
What further role could or should it play in teaching? For example, in lectures other than ethics. You mentioned the bachelor thesis before; should it also play a bigger role in the bachelor thesis or studies in general?
Prof. Van der Does 
I think what we do here with our good scientific practice lecture is enough for the bachelor's, but especially in the master's thesis it should come back again. And in your work discussions, you should give examples of things by other groups that you don't agree with. As a supervisor you really have to teach your students how to read papers, how to check if the proper controls are there. Almost every supervisor does that, but that's an essential part of detecting fake science.
Jonas iGEM Freiburg 
We want to go to schools this summer and give a short lecture about fake science and research integrity, to spread a bit more awareness about this important topic. How can one make other people more aware of fake science? What tips do you have, especially for teaching that topic?
Prof. Van der Does 
You can happily use my lecture. I think you wake people up to these kinds of things if you give them examples from the real world. In the lectures I discussed Andrew Wakefield, the vaccine skeptic; I think people always forget how many things he did wrong in his publication, and I think just the list of the 14 things he did that are against good scientific conduct makes it clear.

Jonas iGEM Freiburg

So now we are nearly at the end of the interview, and we have another dilemma game question for you. It is about outliers. I can read it out once more:

“When screening my data, I find that there is one extreme observation. What do I do?” There are four possible answers.
a) Nothing, it is part of my theoretical sample for a reason. 
b) I look for information on the observation, trying to find qualitative reasons for the deviation. If there is a good explanation for its position, it must be a niche observation since it is part of my theoretical sample, so I leave it in. However, if there is no explanation for its position, I leave it out, as it is there either due to measurement error or response bias.
c) I look for information on the observation, trying to find qualitative reasons for the deviation. If there is a good explanation for its position, it must be a niche observation, but despite my theoretical sampling there is no place for such anomalous observations, so I leave it out. If there is no explanation for its position, I leave it in to avoid potential sampling bias.
d) Let a colleague review the data and follow her advice, whatever it is. 
Prof. Van der Does 
Now the good answer is of course a): it's part of the data set, and you should just keep it in. Sometimes it happens that you have one point that really lies out, and I would say I would only take it out if, before I take the point, I already have the feeling that something is wrong with the sample. But if I notice that the point is wrong only after I've done the experiment, then I keep it in; it stays part of the sample, and it just means that you have to repeat the experiment more often to get the error bars smaller again. That's science.


Prof. Dr. Sonja-Verena Albers is a professor of microbiology at the Faculty of Biology at the University of Freiburg. Her research field is the molecular biology of archaea. Besides that, she serves as dean of the faculty and is a member of the commission for securing "Redlichkeit in der Wissenschaft" (integrity in science, i.e. good scientific practice) at the University of Freiburg. With this interview, we want to focus on the topic of how fake science can be detected and prevented in university structures, and potentially also in the iGEM competition or other organisations.

Important insights:

During this interview with Prof. Albers we discussed different aspects of research integrity. We talked about her previous experiences with fraudulent data and her opinions on how to tackle these issues, whether they arise within her own research group or at the university. We talked about her work on the committee for good scientific practice and gained further insights into how we can improve our own measures for good scientific practice.

To start the interview with Prof. Albers, we discussed research integrity and transparency in the research publication process. We discussed the importance of being honest and clear not only when describing and analyzing data but in every aspect of the publication, such as methods.

" [...] the methods and materials are often, I think, the most important part for the people working in the lab, but they are mostly, I think, badly written -Prof. Albers

We asked Prof. Albers about her personal experience with fraudulent data publication, and her takes on how to solve these issues, whether the issue lays in your own publication or research, or those of your colleagues. She highlighted the importance of keeping raw data files to ensure you can always come back and re-analyze or provide the data to independent researchers for transparency. 

" What's really important is when you write an article that you have like all the original data -Prof. Albers

To improve our own project and research integrity, we discussed how a good scientist acts and thinks. 

" I think a good scientist is somebody who's really enthusiastic to learn more about the system that he or she studies and does this in an open way -Prof. Albers

Furthermore, we talked about the structures at universities that ensure research integrity and prevent fraud, to better understand the systems that safeguard the integrity of our research. We also talked about the future of research integrity in the coming years, and what awaits young scientists in the field of biology and synthetic biology.

" I think we will see in the next 5 - 6 years a big change in the system, but I'm not totally sure where it will go. But it's clear that the generation today is on the big pressure. That's really clear. -Prof. Albers

Finally, we wanted to know how we could improve our research integrity and prevent biased, falsified or wrong results. Prof. Albers recommended using an open lab book as well as an open discussion policy about the results we achieve in the lab, which we decided to implement in our daily research lives during this iGEM project. 

" I think that what I said, I guess you guys I think have to do like an open lab book so that in principle everybody can see exactly what you have done and also be open and report to each other a lot now so that you know, as I said, you have to trust each other -Prof. Albers

Jonas Widder – iGEM Freiburg

OK, so this interview is about research integrity, with a focus on systems for detecting, reacting to, and preventing fake science at universities. Today, Professor Albers is our guest.

To start the interview, we wanted to ask you: what constitutes good scientific practice for you?

Prof. Albers

What constitutes good scientific practice?

I think that one of the things I find most important, because we are doing experimental work, is that in principle everybody outside of this lab should be able to repeat the experiments that we do, and that the results we get are reproducible. I think this is something that I find extremely important because, in your science, you want to do things that cannot only be done in your lab; it should actually be what we call open science.

That it's clear how you did things. Yeah, I think that reproducibility is one of the big buzzwords that I find really important.

Because if you work with a special system, you want somebody in America to be able to do the same thing, because you don't want to keep it for yourself, it should be open for everybody, that's the point of reproducibility. And there we come to the question of how to do that?

The most important things about how we produce results are in our manuscripts and publications. There, the methods and materials are often, I think, the most important part for the people working in the lab, but they are mostly, I think, badly written, usually because people just want to have the cool results but don’t describe in detail how they did it.

In general, yeah, so I think that's something that I find really important.

Jonas Widder – iGEM Freiburg

Awesome, very interesting point. We also discovered that issue during our iGEM project.

Prof. Albers

Yes, yes, I mean, you read the methods and then you think „Oh yeah, I just do it like it is there.“

And then you realise, „OK, it's not working“, so that's uncool. And so you actually talk to the person, because in science this is what you do, you contact the author, and they tell you, „We have these little extra steps.“

And I said, „Ah, that's interesting. So why is it not in the protocol? Also in the publication, right?“ And this is always the question: is it indeed written so that somebody else can do it?

And it's especially important for iGEM. I mean because you have to do everything open, right? That's really important. Can somebody else somewhere in India do the same thing or not?

Jonas Widder – iGEM Freiburg

Nice. We have the dilemma game with us and we wanted to ask one or two questions of it. The first one is called „so close “. I read it out one time: „After a couple of rounds of reviews I discover an error/omission in data analysis in a co-authored manuscript. At this point the paper has almost been accepted and the reviewers have never made any remark about the data analysis. I know that my co-authors do not want to miss out on the chance to publish. I was not the prime person responsible for this part of the data analysis. What do I do? “

Prof. Albers

I mean, I would for sure contact the last author and say „OK, I discovered this mistake, and we should actually re-analyse the data” because I mean if there's a really clear error then we have to fix that and otherwise I would propose to remove me from the publication.

Because that's not something you want.

I mean, it's hard and I get the dilemma. Especially if you maybe talk about a publication in nature or science or so right? I mean, this can take two years. But you never want to have an error published if you can prevent it.

And maybe I can bring an example here: I actually experienced such a problem with one of my articles.

I'm working on a certain project, and I published a paper on that project, I don't know, let's say in 2004, and then another one in 2008. And when I came to Freiburg, because it was about electron microscopy pictures, I went here to somebody and said, "OK, this is how the things look. This is what we want to do with you." Then she actually called me, saying:

"I found something strange in your two articles published in 2004 and 2008." I was like, "OK, what?" It turned out that both publications had used the same picture. That's not good, right? I was really shocked, but it was the same first author, and I was the last author on both, I think. But this was 4 years apart, and the same picture was used, just turned around and cut a little bit. I was totally shocked. I was like, what the f***. I mean, I'm the last author, so theoretically I should have seen this. But it was four years apart, and as I said, it was not something that I directly saw. When you look at it in the end, you likely see it, because the person from here put the two papers next to each other, and then you could see it. And nobody had ever seen that, so we were not accused of anything. So, I went to the first author, who is now somewhere else, and asked, „OK, what about this? We cannot leave it like this. So what has happened here?“ In the end it was not really clear what happened; I still cannot say why she took the same picture, but we made a correction. Yeah, because you cannot leave it. I mean, nobody found it, but still, it's a dilemma. I'm the last author; I'm responsible for this.

And sometimes this is really difficult, right? Because I have to trust my people. I mean, you also know that now. Right now Barbara, I think, is head of the team. She cannot control everything, right? She has to trust you guys in what you do daily and what you report, and this is something that we also have to do.

And I mean I'm a professor, I'm Dean. I don't see what my people do every day on the bench, so there's a big point of trust. Yeah, and this is something that you have to do to people in the lab because I don't want to distrust people, but it's indeed about, what do you produce and is that correct or not, yeah?

So, this is a really difficult issue for us, because we are always saying the last author should have seen that. I have not seen it; it was four years apart. I mean, there was no intention. This is not a big story; we didn't need to fake anything there. But the PhD student, I still don't know why, she couldn't explain it to me. Now we had to correct it. We did it, and so that was fixed, and there was no fake news in that sense. It was all corrected, but you have to be able to trust. And that's a really big issue, and that's sometimes difficult: how should I control that what they do daily and what they report is correct, right? I mean, we are working with an electronic lab book. That's already different from the paper thing, so I can in principle check what they do. But they of course also filter what they write in there, right?

What's really important is when you write an article that you have like all the original data. This is also with journals, they ask you to provide the original data. So, because if you make a western blot and you make it further presentable for the paper, you kind of cut out what's important. Nowadays they ask to show the whole picture just to show that you're not just cutting out what you like.

Because that's very easy. I mean with Adobe Photoshop today everybody can do that. 

I mean, Chris, I think, talked already about Elisabeth Bik. I see her on Twitter all the time. It's amazing. I have to say it really shocks me every day what scientists do to produce data. If you look at what she finds, I don't know why people do that, because that's not science anymore, right?

So yeah, it's about trust, and the question is, how can you control as a PI? 

That what you get from your people is not fake. It's really difficult, I think. So, it’s very easy always to say, the last author should have seen that, but there's a big pipeline between, and that's something that's really difficult to control, I think. 

And I said trust because I also once had a certain student:

Because when you do science, I mean you talk about you have a hypothesis, right? So, you have a hypothesis, then you come up with an experiment and you talk with the student about what you think might come out or not.

But science of course is open, right? I don't tell the student this is my hypothesis, and I think I mean this is the right hypothesis and you have to produce the data. This is not how it works, right?

But of course, you go into a research question with an idea.

But we found out after a year that the student in question was producing data to fit our hypothesis, and that, again, is something you cannot control, because it's very difficult to find out. It was just by accident that we saw that he was actually doing this one experiment, I don't know, 100 times, to get the data that sometimes fitted the idea that we had. I was like: this is not science.

I mean, I just have an idea and that doesn't need to be right. That's why I like science. I mean, you get surprises, right?

So, this I mean, this is a very difficult grey area. Yeah, and then you also get like supervision things and stuff.

Jonas Widder – iGEM Freiburg

That already brings up many points for our next question, because, as you already mentioned, you are Dean of the Faculty of Biology and also head of the microbiology of archaea working group [AG] at the University of Freiburg.

How could you find out about violations of research integrity at the Faculty of Biology, but also in your group, and secure good scientific practice? For example, if someone really does fake science?

Prof. Albers

That's a good question. I think I already kind of covered what I'm trying to do in the group, right? We very often have work discussions where people should only show their raw data. So, I don't want to see the cut blot; I want to see the whole blot, and things like that. As a PI you can only try like that. But as I said, sometimes it can be difficult, because people decide what you see and what you don't see, right? That's something that you really have to get a feeling for, and which involves a lot of trust.

I think you have to make clear to your people, how can you phrase that? The problem, you know, is that there's a lot of pressure. So why do we do this? Why do students do that? Because they want to have a good paper, or they want to have nice results, and you have to make sure that they know that this is not going to save their career or their life or whatever. For them, at that point, it's really important, and it has to be clear that you cannot get that by faking; this has to be clear in the group.

And of course, as Dean of the faculty, I don't have any control over that. There's no committee that would now check the publications of the other professors or so. And as far as I know, also in the commission for „Redlichkeit in der Wissenschaft“, I don't know how you translate that, I have not seen any case from the biology faculty so far, but that's from the last four years or so. Actually, I think faculty-wise there's no control and no organisational committee that would look at that or find it out.

Jonas Widder – iGEM Freiburg

OK, and another thing that came to our mind: if we find out that in our own working group there is something suspicious going on, that someone else might be faking data, which channels are available to report that? Or maybe not directly saying that another person is faking data, but if you have the feeling there might be something wrong, which channels are available to report something like that?

Prof. Albers

Yeah, I guess that would be the "Ombudsmann" or "Ombudsfrau" [ombudsperson]. They deal with it if supervision doesn't work, and of course the ombudsperson would also be important for this, because that person would know whom to contact and what to do. Because I guess there should first be discussions with the group leader, and there are certain rules that should be followed; that would always go via the "Ombudsverfahren" [ombudsman procedure].

Jonas Widder – iGEM Freiburg

You are a member of the commission for „Redlichkeit in der Wissenschaft“ at the University of Freiburg as a so-called ombudsperson. Can you shortly explain what the „Ombudsgremium für wissenschaftliche Redlichkeit“ [ombudsman committee for scientific integrity] in Germany does, especially in its role of securing research integrity?

Prof. Albers

This is in principle what we just said. If somebody brings up the point that maybe in a thesis or in an article some experiments are not OK, are misleading or faked, then the committee has to look at that.

In the committee we always have people from the different faculties, so that there should always be somebody who can look at the case. Because I couldn't judge a book from philosophy or so; it would be more like biology-related things that I look at. Then you try to look at it yourself, and then you involve other external people who look at it again to get reviews, and at a certain point it is discussed in the committee, and then there has to be a decision. And then it really depends on the level we are at, whether it is just one article or whether it is in several articles.

What should happen afterwards? If you're talking about a thesis, you would say, OK, this cannot be taken as valid anymore, and then you would remove a title from somebody. If it's about articles, at a certain moment, if it's very grave, then the university might have to take action.

If it's a professor who has been faking 10 years of science, then this has to be handled at a really high level.

Jonas Widder – iGEM Freiburg

So, as I understand it, the Ombudsgremium only acts when someone comes and says that something is wrong. It is not checking by itself.

Prof. Albers

No, no, this is not indeed our task. So, we only deal with things that are brought to us. That's an important thing.

Jonas Widder – iGEM Freiburg

OK, just to clarify: you already mentioned the case of a professor faking over several years. If misbehaviour in terms of faking is detected, does it mean the end of that person's career? Or is that not always the case?

So, what is the concrete reaction when someone is faking?

Prof. Albers

I have to be careful there because I think this is a legal thing and I don't know actually what the legal rules are for that.

And I think, I don't know whether you are on Twitter, but there has just been this big case where people faked Alzheimer research. And I think people are now complaining that this person is not really removed, but gets another NIH grant, and you look at that and think, "OK, how can that be?" I think that's really at the university level; the rectorate, I think, has to decide what happens then.

I don't know. I cannot remember that something like that has happened since 2014, when I joined here. So I don't know what the university would actually do, but that's, I guess, a really difficult decision, because the question is how far you have to go to be removed as a professor. Because it means you cannot do anything else in your life, right? I mean, your career is over. I don't know what you would do then. So yeah, that's really difficult, and I don't know what the official rules are.

So, whether there's a graduated system, like if you have done this for 10 years then this happens, or if you have faked ten articles then that, I don't know.

Jonas Widder – iGEM Freiburg

Now we already discussed some systems for detecting fake science in Freiburg and how research integrity is secured here. Do you know of similar systems to maintain research integrity at other universities in Germany, but also around the world? Or does each university have its own system for securing research integrity?

Prof. Albers

I don't really know, actually. The question is, of course, whether we can call how we behave a system, really, or a detection system. In the end, as you said, if something like this is found, it's more like a personal thing, right?

That somebody says, OK, something is wrong here, or that you have a person like Elisabeth Bik who says this is not OK. And you of course have this website, peer pub I think it is called [edit from iGEM team Freiburg: it's called "PubPeer"], where, if somebody finds a mistake in a manuscript, they actually post it there. And I sometimes check. I mean, you don't want to be on there; that's a website where you don't want to be. And there you can see that people like Elisabeth Bik post a lot there, to make people aware of the fact that people are watching.

So I have to look it up. I think this is a really important general system, where people really post things, saying this is not OK, and then the authors can react to that; and very often you see that the answers are really fishy and not OK.

But it seems to be really difficult to get articles retracted, even though this happens very often; Elisabeth Bik is commenting on it all the time. She finds it in Nature, Science, lots of medical articles which are very obviously faked, and they are not retracted. And then you have to ask Nature and Science how this can be, right? Because, as we know from the Alzheimer case, this can really influence a lot of people and a lot of research, or the pharmaceutical development of new medicines, and you have to get people to do the right thing, right? Indeed, if it's about blots and these images, images are so easy to fake nowadays. Everybody can do it. You could also make a Nature paper that way if you wanted, right?

So I think there are more and more people looking into image forensics, and they post it on websites like PubPeer. And there you can really see; you can just check articles and see the comments where the authors have answered. But then, indeed, retraction doesn't happen very often. So that's the problem now.

Jonas Widder – iGEM Freiburg

OK, so now maybe a bit of a complicated question. Do you think that, with the Ombudsgremium and these systems, research integrity is maintained at high standards in Freiburg? Or where would you see potential to improve the securing of good scientific practice at the university?

Prof. Albers

Yeah, I think that's really difficult indeed.

I guess we all say we are following the rules of good scientific practice, and in your studies you get lectures about that, and the professors should really know about this, but it's not that we get courses on that again.

I guess that when I got the certificate for becoming a professor, I had to sign that I would follow the rules of scientific practice.

I think that was part of it, so this is the only time that we kind of say, "I'm going to follow the rules."

And yeah, as I said, as a Dean I'm not looking at the articles of my colleagues to check how they're doing, right? And there we have trust again. I would trust all my colleagues that they're doing no fake science, but I cannot be sure, right? So I think this is the situation. I would say the university of course wants all of us to really follow the rules, but the question is how you can enforce that.

And I don't know how you could really do that, except that, as I said, we have to sign as professors that we are of course going to ensure that we are not publishing fake science and that we're following the rules.

But there's not a yearly check, and I think there's nobody who would test our articles for whether there are any problems or not.

Jonas Widder – iGEM Freiburg

So, it's much about trust, like you said.

Prof. Albers

Yes, I think so yes, yes.

Jonas Widder – iGEM Freiburg

We also spoke with other interview guests about the topic of research integrity, and one statement was: „The production of fake science is to a large extent caused by the system of how we measure how good a scientist is.“

His opinion was that we need to improve the system by which we measure good scientists. So, first of all, I want to know: what makes a good scientist in your opinion?

Prof. Albers

What makes a good scientist?

What I said in the beginning: I think a good scientist is somebody who's really enthusiastic to learn more about the system that he or she studies and does this in an open way. Because, as I said, as a scientist you always have a certain hypothesis, and this is why you think of a certain way to prove that hypothesis. And then you get the outcome. But what is really important is that you're open to the outcome. We should never think, my hypothesis has to be right. I've been proven wrong in my hypotheses more times than right, I think. And that's what makes a really good scientist.

You have to have a hypothesis to think of an experiment that you want to do, but you have to be open, and that's, I think, the beauty of science, actually. Because every time that something else comes out, I'm actually like, "Oh, that's cool", and then you go on, right? So, this is what you want. Yeah, I think that's a good scientist.

Jonas Widder – iGEM Freiburg

OK, convincing. So, regarding the statement of this person that we need to change how we measure what good scientists are: why, and in which parts, do you agree or disagree with this statement? He gave the example of how many publications you get, and in which high-impact journals. Do you agree with that or not?

Prof. Albers

Absolutely. I mean, this is the whole problem with science: you have to somehow measure the quality of a scientist, and I think at the moment there is no good system to measure how successful I am.

I think all the publication metrics and stuff are just very difficult. So, I have to say that when I started doing science as a diploma student, I really liked the way it was in Germany, also for the PhD studies, because you were not that article-driven in that sense; you would just do research for three years, which is great because you're just interested in a question. You just do your work.

I then went to the Netherlands to do my PhD, and there you are told at the beginning that you have to have four or five first-author publications before you can get your PhD, and you're like, "OK, that's a different thing", because then you get what you said in the interview before.

If you think about “OK, how can I put this in a publication?”

And I think a mix of that is fantastic. If you can do a little bit like saying, “OK, I'm just curious. I just want to see what happens here.”

But for your students, that's the problem. They have to get publications, because if they want to stay in science, they need the metrics. They need the good papers. I don't know when this will fall. I mean, people are already saying now that we should not look at impact factors.

Because sometimes a publication in Nature doesn't need to be better than a publication in Molecular Microbiology, right? But it's about fields. It's so difficult. So, metrics is a thing that hurts the system, but I cannot really give you a solution at this moment, because that's heavily debated. After the impact factor we had the h-index; now the h-index is bad again. Now we don't know what the next thing is, actually.

So, eLife, I think, is now moving to a new model, because there also comes the whole peer review process, and publishing, and whether you should actually publish open access or not. All of that is really changing a lot at the moment. So, I think we will see in the next 5-6 years a big change in the system, but I'm not totally sure where it will go. But it's clear that the generation today is under big pressure. That's really clear.

Jonas Widder – iGEM Freiburg

Interesting. I would also like to speak a little bit about iGEM and some concrete tips you may have for us budding scientists.

What tips would you have for us to secure research integrity during our project?

Prof. Albers

I think that, what I said, I guess you guys have to do like an open lab book, so that in principle everybody can see exactly what you have done, and also be open and report to each other a lot, so that, you know, as I said, you have to trust each other. But it shouldn't be bad to ask for proof: have you really done this, or are you sure you have done it like this? So, I think that's also something that shouldn't feel like you're controlling; this is always the border between control and trust.

As I said, I also have to control, but I have to trust at the same time, like you guys have to do, right? So you have to trust what the other student is doing.

But at the same time, for the science, it's really important to document what you've done, so that it's clear what you have done and there's no magical step that nobody else can follow. So, I think that's important.

Jonas Widder – iGEM Freiburg

If we in the iGEM team came into a dilemma situation, not exactly this one, because we are not publishing at the moment, but another type of dilemma, how could we deal with that situation, and which person do you think would be the right one to address? Maybe also thinking of your own working group:

Who could they address to get tips on how to deal with such a dilemma?

Prof. Albers

I guess I mean who's leading the team, Barbara and Nicole, right?

The question is, of course, at which level the dilemma is, right? If it were the case that you guys have no results and then Barbara says, "Yeah, but we have to look good at the jamboree in Boston, so we do it like this", then you would have to go to somebody in the faculty. I guess, because the iGEM team is part of the Faculty of Biology, you would then actually have to come to me, because you're kind of representing us as a faculty, and then you might have to talk to me, I would say. Because Barbara would be doing a bad job in guiding you in how to do open science.

If it's one student versus another, then you would have to go to Barbara. So I think there are different levels where you would have to look.

If there were a hierarchy problem, then you would probably have to come to me, I guess, because I see you as a team representing the Faculty of Biology, of course.

Jonas Widder – iGEM Freiburg

I also want to talk about the fact that at the University of Freiburg, violations of research integrity are dealt with via the ombudsperson. What sort of control or regulation could you think of that could be applied to iGEM? iGEM is an international, worldwide competition, and at the moment there is no such regulation. Overall, how could research integrity be secured there?

There are these conditions saying that you should follow good scientific practice, but there's no real control at all from iGEM itself. So how do you think that could be regulated in a better way, or in general, how could teams be made to follow the rules of good scientific practice?

Prof. Albers

I guess, for me, that would really be something that comes from iGEM itself: that they would maybe create a "Gremium", a committee, with people controlling that good scientific standards are followed. Because you have to have this open website thing, right? Do you have to present in between? Or is it only in Boston at the end?

Jonas Widder – iGEM Freiburg

In previous competitions it was only presented in Boston. We have a preliminary safety form, but it only deals with safety issues, and other than that we have the presentation in Boston, which this year is actually in Paris. And also, we have our open website.

Prof. Albers

OK, because then I would expect that they actually make a committee, and I would maybe ask the iGEM teams in between, for example at half time, saying "OK guys, we want to know what your idea is." You would have to show your progress until then.

Because then it would be clear how far you have come, and then you cannot make things up so easily. I mean, you could kind of see: OK, if three months later suddenly all problems are solved and everything looks perfect, you would say maybe we should actually look at how they really did it.

So, I would maybe install something like that, like an in-between check, to actually ask these questions and see how far the teams are.

Jonas Widder – iGEM Freiburg

OK, yeah sounds like a great idea.

To finish this nice interview, we have another dilemma question for you, dealing with an extreme observation. This time we give you answer options, and you can choose one of them or maybe also say that none of them are the right answer.

So it's called outliers, and it's about “When screening my data, I find that there is one extreme observation. What do I do?” It’s a more concrete question than the first one.

Prof. Albers

I think it maybe depends a little bit on what kind of experiments you do. Because I would probably take B if I do a biochemical measurement where I isolate an enzyme and then I do like enzyme assays.

Because there it's very easy to say, "I just do it another 10 times and then I know." I don't have only one sample, because usually you do triplicates, and then I have two others that are right and one that lies outside. So, I would do it another 10 times, and if I then really have 10 the same and one that lies outside, then I would say B is OK. But it really depends: if you do big microbiome data or RNA sequencing or whatever, then you cannot just do that, and then I think I would probably go for C.

And I mean, if this really bothers me, I would always ask a colleague anyway. I think it depends a little bit. So for biochemistry I'm more for B, and otherwise go for C, I think.

Jonas Widder – iGEM Freiburg

OK, perfect. We're at the end of the interview. Thank you for being part of this series.

Trust and integrity in scientific research are of vital importance not only to the scientific community but to all of society. In light of recent scandals, such as the fabrication of data in Alzheimer’s research [1] or the spread of misinformation on social media, it is apparent that research integrity and good scientific practice need to be discussed more. Therefore, we decided to create a survey about this topic and tried to reach as many people as possible. With this survey we wanted to spread awareness about integrity in science, gain insight into the participants’ perception of good scientific practice, and get feedback on our own project design.

We conducted a self-designed survey aimed at people who have experience with scientific work and, ideally, have published research themselves, i.e., master’s students, PhD students, postdocs, PIs/professors, etc. We were interested in the age, profession, and field of research of the participants.

In the survey we used a range of question styles. First, we adapted dilemma questions from the dilemma question catalogue of the Erasmus University Rotterdam; these pose problems that are not easy to solve and make participants think about how to approach a situation that might compromise their good scientific practice. Second, we used simple multiple-choice questions about several good-scientific-practice scenarios that scientists can encounter in their daily lives. To use the full potential of our survey and integrate it into our results, the third part contains questions aimed at a scientific and trustworthy presentation of our wiki. Since we were reaching so many knowledgeable people, we wanted to take the opportunity to ask them what is important in a good scientific presentation.

Survey on research integrity


During the planning and design stage of our survey we received great advice from Dr. Irina Sigel of the Institute of Sociology at the Albert-Ludwigs-Universität Freiburg. She helped us formulate our questions, gave us lots of inspiration and advice on how to structure the survey, and made sure we knew the basics of properly conducting a survey. We are very thankful for her advice and the time she took to help us. Through her, we became aware of the pretest method, which we carried out in two phases with three people each, all with very different scientific backgrounds. Here we could spot obvious flaws and refine our questions to a point we were confident with. Final corrections were made by our supervisors Prof. Barbara Di Ventura and Dr. Nicole Gensch, who helped us focus on the most important aspects of our mission: to fight fake science and receive feedback from the science community. They dissected each and every idea we had until only the essentials remained.

The survey was distributed via Twitter and other social media channels and went to several faculties to be forwarded on request (including biology, chemistry, biochemistry, medicine, molecular medicine, physics/mathematics, and public health management). We were able to attract almost 150 participants. Put into perspective with the estimated number of people made aware (about 40,000), the response ratio is very low (0.375%). The survey was explicitly addressed to people actively doing scientific work.
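The response-rate figure quoted above can be reproduced in a few lines of Python. This is only a sketch of the arithmetic: the 150 respondents and the reach of 40,000 are the numbers reported here, and the reach is itself a rough estimate rather than a measured value.

```python
# Sketch of the response-rate calculation reported above.
# estimated_reach is the rough figure from the text, not a measured value.
respondents = 150
estimated_reach = 40_000

response_rate = respondents / estimated_reach * 100  # in percent
print(f"Response rate: {response_rate:.3f}%")  # prints "Response rate: 0.375%"
```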


The first questions were aimed at the demographics and background of our participants. We reached a broad spectrum of age groups, with the majority being 25-34 years old, as shown in Figure 4.

Figure 4:

We wanted to reach participants at various stages of their scientific careers to get a representative impression of the scientific community as a whole. The relatively even distribution of participants among master's students, PhD candidates, postdocs, and professors can be seen in Figure 5.

Figure 5:

Our survey was mostly aimed at and distributed to people in science, more specifically in biology-related fields. This is reflected in the academic backgrounds of our participants, with more than half coming from the field of biology. Figure 6 shows this distribution.

Figure 6:

This concludes the information about the demographics of our study and allows us to evaluate the results in relation to the audience we reached.


After the general questions about demographics, we started the survey proper with dilemma questions. These questions have no right or wrong answer but inspire people to explore their own feelings about research integrity, and they are meant to set the tone for our survey. In total, three dilemmas were presented to the participants.

The first question deals with the pressure on researchers to produce specific results and is called "One additional experiment". The following figure details the question and the answer options given.

Figure 7: Dilemma 1 - "One additional experiment": The paper presenting the data I collected is reviewed in a highly reputable journal. The reviewers, however, want to see evidence of a specific process. In fact, they spell out a specific design for a potential study along with the results that they would like to see. I run the recommended experiment, but the results do not match the expectations of the reviewers. What do I do?

It is apparent that the participants clearly favor the first and third options, shown in blue and yellow in Figure 7. As more than half of our participants come from the field of biology, we wanted to compare the biologists' answers to those of the rest of the participants. Overall, they show the same trend: 69.7% of biologists agreed with the first option shown in blue, while 57.3% of participants from other fields chose that option. The third option in yellow was chosen by 27.9% of biologists and 36.1% of the others.
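The subgroup percentages above were obtained by splitting the responses by academic field and computing the share of each answer option within a subgroup. As the raw survey responses are not reproduced on this page, the sketch below uses a small made-up sample to illustrate the procedure; the helper name `option_shares` and the sample rows are our own and purely illustrative.

```python
from collections import Counter

# Illustrative sketch of the subgroup comparison described above, assuming
# each response is stored as a (field, chosen_option) pair.
# The sample data below is made up; real survey responses are not shown here.
responses = [
    ("biology", "A"), ("biology", "A"), ("biology", "C"),
    ("chemistry", "A"), ("physics", "C"), ("medicine", "A"),
]

def option_shares(responses, group_filter):
    """Percentage of each answer option within the subgroup selected by group_filter."""
    subset = [opt for field, opt in responses if group_filter(field)]
    counts = Counter(subset)
    total = len(subset)
    return {opt: round(100 * n / total, 1) for opt, n in counts.items()}

biologists = option_shares(responses, lambda f: f == "biology")
others = option_shares(responses, lambda f: f != "biology")
print("biologists:", biologists)
print("others:    ", others)
```

Running the same split on the actual response table yields the percentages quoted in the text.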


The next dilemma question is called "So close" and addresses the struggles that can come with publishing results together with a research group. The figure below shows the question and results.

Figure 8: Dilemma 2 - "So close": I am a co-author in a paper, which has already gone through a couple of rounds of revisions. Suddenly I discover an error/omission in data analysis by one of the co-authors that had escaped my attention previously. At this point, the paper has almost been accepted for publication, and the reviewers have never made any remark about the data analysis. I know that my co-authors do not want to miss out on the chance to publish. I was not the prime person responsible for this part of the data analysis. What do I do?

For the dilemma represented in Figure 8, the participants are split between option 3 in yellow and option 4 in green, but the second option in red also got quite a few votes. Again, the trends are very similar for participants with a background in biology compared to other fields. One noticeable difference is that no one from the biology group chose option one in blue, while almost 3% of participants from other fields chose that option.


The last dilemma question is about dealing with unexpected outliers and is called "Outliers". A version of this dilemma was also presented to all our interview partners on the topic of research integrity and good scientific practice. They had very different takes on this scenario and raised important points. If you want to see their insightful answers, check out the interview transcripts here: link to interviews

Figure 9: Dilemma 3 - "Outliers": When screening my data, I find that there is one extreme observation/outlier. What do I do?

The majority of participants chose the second answer in red, and this does not change when analyzed by background (biology compared to the other fields), as depicted in Figure 9.

This concludes the dilemma part of the survey. For the given scenarios there was variety in the participants’ answers, which shows that good scientific practice in these situations is subjective and there is no single “right” answer. This showcases how important it is to have a discussion about research integrity, because not everyone has the same definition in mind. In the next section we wanted to get the participants’ opinions on research practices and on what they consider improper research.

Research Evaluation

The first question, represented in Figure 10, shows that most of our participants have experience with publishing research: the vast majority have at least co-authored a paper, and almost 100 participants have even published their own scientific research before.

Figure 10:

Next, we asked about a few scenarios that could be considered bad practice in science. The data in Figure 11 show that many participants have found themselves confronted with questionable findings. Yet most participants answered “No” to the question of whether they have ever seen obviously fake data.

Figure 11:

Most people could relate to questioning conclusions drawn from results and the interpretation of data. Participants were also often skeptical of the experimental design.


The question depicted in Figure 12 touches a sensitive topic, as participants are asked to pick situations in which they themselves might not have chosen the best way to handle their research. It is noticeable that more participants admit to knowing someone who has falsified their data than admit to having done so themselves, as Figure 12 indicates.

Figure 12:

The question in Figure 13 is especially interesting as it relates to the results of the dilemma questions. For the dilemmas it was not obvious what the participants consider improper research practice, and this question showcases that finding again. Three scenarios were given, and for each, the participants are split on whether it counts as improper research or not. Most noticeably, excluding outliers seems to be viewed as a situation where each case can be different and the circumstances decide what counts as good scientific practice.

Figure 13:

Overall, this section on research evaluation showed that our participants have experience with publishing data, yet what is considered research integrity and scientific good practice can vary a lot.

Research Integrity 

The third part of our survey concerns our participants’ outlook on research integrity, what could be a threat to good scientific practice, and possibilities to protect it.

In the first question, participants were asked to rate reasons for data falsification by how strongly they think each contributes to the problem. The highest-ranking reasons among our participants were the prestige offered by high-impact-factor journals for significant data, and personal ambition to perform and deliver high-impact results. Outside pressure was also ranked highly. All these factors stem from a common motivation: wanting recognition and prestige. Political reasons for data falsification were ranked lowest. Figure 14 below shows all results.

Figure 14:

We can conclude from this question that researchers often find themselves in high-pressure situations, due to internal or external reasons, and these levels of stress can entice scientists to disregard research integrity.

As a follow-up, we suggested measures to protect integrity in science and asked the participants how effective they think these measures are. The highest-ranked option goes hand in hand with the biggest problem identified in the previous question: the participants chose “less pressure to publish” as the most effective measure. We kept this answer intentionally broad, as there are many ways to reduce external or internal pressure on scientists, and each researcher or research group can find a strategy that works for them. The measure rated least effective was “more training in research integrity”, which was unexpected for us. Considering the broad spectrum of answers in the previous sections on dilemmas and the evaluation of research practices, the participants seem to be split in their view of what constitutes good scientific practice. All answers are shown in Figure 15 below.

Figure 15:

With the last question in this section, we investigated how critically participants deal with sources. Participants were allowed to tick multiple answers. The possible answers and the results are depicted in Figure 16.

Figure 16:

Option 5, with almost 60%, was the most chosen. This option entails doing extra research and looking for additional, unrelated sources, which is a sign that the majority of participants deal critically with sources and are willing to invest time in fact-checking their data.


Integrated Part

The last section of the survey dealt with questions that give us direct feedback which we can put straight into our workflow and the wiki. The first question asked generally about the appearance and usability of a website presenting scientific data. Like all questions in this section, it was answered on a scale from 1 (less important) to 5 (very important). As shown in Figure 17, the aspects rated most important for a scientific website were an “Intuitive and easy to understand design (easy to find content you are looking for)” and “Good functionality (fast web loading content; etc.)”. Next, with significant importance, came the “Modern look and design”. “Interactive design” as well as “The content should resemble a scientific publication (similar style and format)” found some approval, while “A website with warm welcome” found none.

Figure 17:

The next question, in Figure 18, was aimed at opinions about experimental design. Here we can observe that “a clearly written experimental procedure” has the highest approval rate, followed by “Experimental design should have explanation of its purpose”. A measurable but moderate importance is also given to “proof of concept for each new methodology used”, “at least two techniques to verify the conclusions”, and “a high number of repetitions”. Last and least important is “computational proof of concept (simulations)”.

Figure 18:

The question “Data analysis should...” in Figure 19 shows that it is most important for our participants that data analysis “follow (its) standard protocol”, followed by “focus on quantitative results”. “Focus on qualitative results” received less approval.

Figure 19:

The last question, in Figure 20, concerning the representation of figures, shows a high importance attached to figures being “color-blind friendly” and “zoomable”. “In colours” wins over “in black and white (or grey scale)”. The comparison of placing figures within or next to the text revealed that figures are preferred “shown next to the relevant text”. The option that figures should “additionally be available in a slideshow at the beginning/end of the page” was not rated as very important, and the same holds for the option that figures should be “used rarely/sparingly (Only the most important ones should be used in the main text)”.

Figure 20:


Overall, this survey gave us valuable insights into science integrity and into the opinions and experiences of colleagues in science. We learned how the concept of good scientific practice is perceived and applied in other research groups, which has helped us in our quest to fight bad research practice, since we are now more aware of possible mistakes we can make during experiments, data analysis, and publishing. We were able to identify key factors in science integrity that build the foundation of a trustworthy environment. Since the survey was completely anonymous, we consider the answers to be honest reflections of the participants’ thoughts.


The survey featured several dilemma questions that have no right or wrong answer and are designed to resemble a discussion rather than a multiple-choice test. By starting our survey with dilemma questions, we intended to prepare our audience for situations that are complex in nature. Our participants often had to decide what to sacrifice: either save face in front of colleagues or supervisors, or admit their own mistakes. Most participants responded that they would honestly communicate negative results, as in the dilemma “One additional experiment” (Figure 7), where most participants would either communicate negative findings or conduct further experiments. Just a small percentage would withhold unfitting results from supervisors or stakeholders. The second dilemma, “So close” (Figure 8), presents a similar landscape. While participants chose different options when reacting to a mistake found in a paper they co-authored, most decided to communicate the observation in some way. Very few chose not to react at all.

The dilemma questions, as well as the data shown in Figure 12, suggest that very few participants would choose dishonesty or modification of data over openly communicating a mistake or a negative result. However, a significantly higher number of participants admitted to knowing someone who published false results. This discrepancy is an interesting observation, and we propose several possible explanations. First, scientists might often not purposely modify their data: it can be a subconscious decision induced by high pressure to perform, whether self-imposed or external. We as scientists might not always actively perceive our own misinterpretations or falsifications, and may be more aware of them when reading other scientists’ data. A second hypothesis is that scientists who are already more aware of science integrity and its possible pitfalls were more likely to take the time to participate in our study. A third possibility is suggested by Figure 13, in which we see that different people perceive different scenarios as actual scientific misconduct; the general disagreement on this question is surprising. Either way, these scenarios underline the importance of open communication between colleagues, in which pointing out possible discrepancies or mistakes is a habit rather than an attack or something devastating. Furthermore, it is important to encourage this discussion to establish common ground on the perception of scientific misconduct in the scientific community.

In general, many participants said they have perceived some kind of scientific misconduct. However, as shown in Figure 12, the largest part has never encountered truly fake data. In a field that has recently been shaken by falsifications of data severely affecting Alzheimer’s research, this conclusion is a very reassuring one.

Integrated Feedback

We concluded the study with several questions on how to present scientific data so that it is as comprehensible as possible. We wanted to learn from the target audience that chAMBER was designed and engineered for: scientists. We were able to integrate the resulting feedback into our wiki. We made sure that our content is always easy to find (see Figure 17); we achieved this by designing a wiki that leads from the main page to any content in three clicks or fewer. Additionally, we thought about presenting large-scale data and texts in a convenient way and chose the integrated PDF-in-site tool (as seen in the methods of this page). For good functionality, we compressed 3D models and images, making the web pages more accessible for slow internet connections and devices.

To integrate our participants’ feedback on clearly written experimental procedures in question 16 (see Figure 18), we uploaded detailed protocols for all our lab procedures. Additionally, we decided to publish all our raw data along with our lab book. This allows everyone to re-evaluate and verify our data, while at the same time providing an easy way for others to gain better insight into exactly what we did. This can be very useful when someone wants to repeat and expand on our work.

Furthermore, we integrated the participants’ feedback on figure design by eliminating barriers on our wiki: we used colourblind-friendly colours in all our data-representing graphs and made sure all our figures are zoomable or at least in high resolution. This feedback was especially important to the participants.