
3 Q&A interview with Jonathan Stone, Head of Northern Europe, MVA member Benchling

Aug 6, 2024

1) Why has Benchling decided to join MVA and focus on the Nordics and the Medicon Valley region?

For the past three years, we’ve been hosting a Nordic Digital Science & Innovation Day in Copenhagen. It’s one of those opportunities that everyone in our company truly anticipates, because we know this region is on the leading edge of innovation in biotech, a real powerhouse. For Benchling, focusing on the Nordics and the Medicon Valley is about keeping an ear to the ground on the latest advancements in the field.

At this year’s event, I was impressed by customers of ours, including Novonesis, who shared how they’re building innovative models to support their biotech partners with structured CMC data, digital twins, and electronic batch records. AstraZeneca shared innovations in plasmid generation for their biologics R&D: what was once a manual, outsourced process that took six weeks per cycle has been reduced to two to three weeks, in partnership with Benchling. A Senior Researcher from The Novo Nordisk Foundation Center for Biosustainability (DTU Biosustain) presented a novel software application to make ML more accessible and reproducible for protein engineering. We heard from the Head of IT at Zealand Pharma about balancing security and science from a digital perspective.

Tech and bio innovation is thriving in the Nordics, and we want to be an active player here.

2) How can Benchling help future-proof R&D informatics?

When asked to identify the major limiting factor in their work, scientists and bioinformaticians point to siloed, inaccessible data. An enormous amount of data exists in R&D, but teams must be able to make sense of it today as well as five or even ten years later, especially now, with the demands of AI/ML. All too often the data are useless: they lack metadata, are missing context that left with a former employee, or can only be read by an outdated machine that no longer exists.

The solution is to ensure that every piece of data is captured in a structured environment. In the research ecosystems of some larger organizations, you’re dealing with hundreds of different applications and software tools that collect or store data. Tying all of that data together and centralizing it, so you can run it through models, is quite challenging for most organizations.
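To make the idea of "structured capture" concrete, here is a minimal sketch of storing a measurement together with the metadata needed to interpret it years later. The schema and field names are illustrative assumptions, not Benchling's actual data model:

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical record: the measurement travels with its metadata
# (units, instrument, protocol version), so a future reader or an
# ML pipeline can interpret it without tribal knowledge.
@dataclass
class ExperimentRecord:
    experiment_id: str
    assay: str
    value: float
    unit: str              # units recorded explicitly, never implied
    instrument: str        # which machine produced the reading
    protocol_version: str  # context that would otherwise leave with an employee
    recorded_on: str       # ISO date string
    operator: str

    def to_json(self) -> str:
        """Serialize to a plain, tool-agnostic format for centralization."""
        return json.dumps(asdict(self))

record = ExperimentRecord(
    experiment_id="EXP-0042",
    assay="protein_yield",
    value=12.7,
    unit="mg/mL",
    instrument="plate-reader-3",
    protocol_version="v2.1",
    recorded_on="2024-08-06",
    operator="jdoe",
)

print(record.to_json())
```

The design point is simply that context is part of the record itself: any downstream model or colleague consuming the JSON gets the units and provenance for free, rather than reconstructing them later.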

As AI continues to improve and become ever more ubiquitous, so too will the need for a robust data infrastructure — one that captures standardized, centralized data to input directly into those models. Customers across the Nordics get real value today with the Benchling R&D Cloud for exactly this use case.

One great local example is Novonesis. They’ve been at the forefront, digitizing and unifying teams on a central platform with Benchling. Data scientists used to spend up to 90% of their time reformatting data just to glean insights. Now the data is FAIR-ified, it’s easier to access across Novonesis’ global team of scientists, and they’re able to increase throughput to keep up with the new volume of data that comes with their innovation in strain engineering. This is how you future-proof R&D.

3) How do you think generative AI can bring new value to life sciences?

Much of the hype around AI has centered on drug discovery, but my excitement lies elsewhere: how can we use AI to achieve operational efficiency throughout the entire R&D lifecycle, so it doesn’t take a decade to get from discovery to a marketed product?

We all know that there’s an insane amount of drudgery in being a scientist, from knowledge work like report creation and IND filings, to clinical trials, to data standardization. I hope that great technology, like AI and what we’re doing at Benchling, will help scientists be more effective and take away the administrative tasks that get in the way of doing actual science.

LLM-savvy scientists are already discussing experimental plans with chatbots, asking them to help identify sources of error, propose metadata to track, suggest appropriate statistical tests for the data they generate, and reformat and summarize experimental data, saving hours or days of legwork. OpenAI’s Code Interpreter and Benchling’s own Report and Chart Generation with LLMs are good places to start.

This year, companies will focus on the ‘building-the-foundations’ stage to make AI/ML possible. This includes building systems to standardize and structure data, cultivating talent and the right skills, and doing the hard work of cleaning data amid all of biology’s curveballs. Creating data that’s fit for AI/ML will be a critical differentiator for success, and it will be worth the investment.
