Developing a data and technology-driven flexible lab operations model




5 min read
Disclaimer: This article is written somewhat tongue in cheek; although you may find some interesting information here, take it lightly.

Let’s just start by breaking it down into smaller pieces… 

Digitalization means that, in the end, we want digital data that we can expect to be stored in digital form, most likely for further use. 

From the source - uh… what is the source? Can we get exact data, and what does it mean? What do we actually measure?

To a scientist - who is, or will be, the scientist? Is it a human, a computer program, or an AI? Is it just one scientist, or will the data be used by many along its “data journey”? If so, where are they - locally in a lab, or scattered all over the world? 

Well, it seems there is no simple answer to these questions, and it is very hard to determine what kind of digital data will be needed, who will use it, and how. Is it even possible? Huh… in my opinion… yes and no.

It is more like striving for an ideal solution: you already know it is unattainable, but you keep trying hard anyway. So maybe the questions we asked in the introductory paragraph are simply stated wrongly? We should rather ask how to make the data itself more universal and available to whoever wants to use it. 

There are many protocols and communication standards for data transport and unification. For example, MQTT, Sparkplug B, OPC UA, NAMUR MTP, and SiLA 2 are all used in different industrial and automation contexts.

MQTT and Sparkplug B are lightweight messaging protocols designed for efficient communication between devices and systems, with Sparkplug B specifically tailored for industrial environments. OPC UA is a versatile and robust protocol enabling secure data exchange and interoperability across platforms. NAMUR MTP (Module Type Package) is a standard for describing modular process equipment, enhancing interchangeability. SiLA 2, on the other hand, focuses on structured communication for laboratory automation, facilitating seamless interaction between instruments.

While they share the goal of enabling efficient data exchange, these protocols differ in their scope, application domains, and technical features, catering to the specific needs of diverse industries and processes.
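To make the idea of "universal, transportable data" a bit more concrete, here is a minimal sketch of how a single lab instrument reading might be packaged for an MQTT-style publish/subscribe system. The topic hierarchy (`lab/instruments/...`) and the payload field names are purely illustrative assumptions, not any of the standards named above.

```python
import json
import time

def make_reading_message(instrument_id, quantity, value, unit):
    # Build an MQTT-style topic from a simple, hypothetical hierarchy:
    #   lab/instruments/<instrument>/<measured quantity>
    topic = f"lab/instruments/{instrument_id}/{quantity}"
    # Encode the reading as a small, self-describing JSON payload,
    # so any subscriber can interpret it without extra context.
    payload = json.dumps({
        "value": value,
        "unit": unit,
        "timestamp": time.time(),
    })
    return topic, payload

topic, payload = make_reading_message("bioreactor-01", "temperature", 37.2, "degC")
```

With a real broker, you would then hand the topic and payload to an MQTT client library (for example, `client.publish(topic, payload)` in paho-mqtt); the point here is only that a self-describing topic and payload make the data usable by subscribers you never planned for.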

So, some effort has already been put into the unification of digital data in many fields, but the bioindustry still seems to lack this and struggles to get there. In my opinion, one of the key reasons is the hermetic ecosystems within companies.

You see, when biomanufacturing became such a big, technology-heavy industry (well, is there any field of bioproduction where technology is not used nowadays?), most of the bio giants kept their data to themselves, treating it as property that, like treasure, should be hidden away in a secure place. But this was wrong, and the industry realized that data that is “in use” can bring even more value.

I think the crack in this way of thinking was triggered by two main things. First, the world simply changed. With the introduction of the World Wide Web, and especially the point when it became Web 2.0, driven by user-created content, people saw how successful cooperation can be. Second, technology and information became available to more and more people, enabling do-it-yourself (DIY) biology and home science experiments. I still recall a mind-blowing TED talk by Ellen Jorgensen, “Biohacking - you can do it, too,” from 2012, when the term “citizen science” was used to describe this movement. 

Now that it has become clear to biotech companies that they can thrive only by sharing data, everyone is looking for a solution. Just take a look at most biomanufacturing conferences nowadays - it is all about “integration, interoperability, and flexibility.” With this in mind, when talking about a flexible lab operating model, we should offer guidance rather than particular solutions.

Don’t be afraid and open the source

Let everyone have it, keep an eye on it, drive the change, and try to keep it clean - but it is the community that will give you the reach.

Do not underestimate amateur scientists

People tend to surprise you - you never know; maybe some kid from another part of the world can help you solve your problem.

Help the data

Do not hold it back when it wants to live its own life - the data wants to see the world and meet new people.

The better the data you provide, the better the results you can expect.

In other words, following David McCandless's TED talk “The beauty of data visualization” from 2010, I would definitely say that in 2023 data is the soil, flourishing with new technologies and concepts, and we are the ones who cultivate it - we plant, fertilize, irrigate, and provide the best conditions for it to grow and bear fruit.
