
Thinking Big

sponsored by Thermo Fisher Scientific

You’ve come to clinical translational science from a background in biochemistry. How has that influenced your work?

To some degree, in translational science it doesn’t matter where you start – we all need to know a good deal of both basic and clinical science. Coming from the biochemistry side, I have an inherent understanding of proteins, and I know how to make assays and technology work. But I had a lot to learn about the clinical side – the pathology of the disease and how people are treated – so that I could find the gaps in our knowledge. I was lucky to have great clinical mentors and collaborators who taught me to think from their side.

What is your lab’s goal?

Our motto (yes, we have a motto!) is “from discovery to patient care”. Our philosophy is that you have to start from a compelling clinical question, then apply (or invent) the tools to answer it. The primary goal of our interdisciplinary Advanced Clinical Biosystems Research Institute is to apply proteomic technologies and methods from discovery to clinical application. The Institute brings together an incredible breadth of knowledge, expertise and skills, which I believe creates the right environment for translational research. In other words, we have the clinical and technical know-how to be able to move things forward more efficiently. The pharma, instrument manufacturer and biotech industries are a vital part of that process, so our second aim is to work closely with companies to develop and scale up tools in this area. To make sure that we’re asking the right questions, we need a magic combination of people, so the third part of our mission is to train the next generation of translational scientists. The scientists being trained now didn’t exist when I started out – today, they can float effortlessly between basic science, assay development, epidemiology and hardcore clinical chemistry. We don’t want our scientists and students to simply be cogs in the machine – they need to have a broad understanding and the confidence to ask questions, to make sure that the data coming through translational pipelines are high-quality.

What is the biggest block in that pipeline?

We need to measure proteins accurately – including all their modified forms – and at a larger scale than we have done in the past. Others have referred to our work as population-scale proteomics – I don’t think we’re at that level just yet, but to get there we need to be able to analyze thousands of samples, with really good accuracy and low variance.

I often end my conference talks by asking researchers what they would do if they could suddenly add a zero to the number of samples that they are sending to proteomics labs. What if, instead of analyzing 10 disease samples and 10 controls, you could do 200 (or even 2000 or 200,000)? And what if you could measure changes of 20 percent, as opposed to the two-fold changes that we can detect now? That is the near future of proteomics.
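To make that scale argument concrete, here is an illustrative (not from the interview) back-of-envelope power calculation using the textbook two-sample formula; the 25 percent coefficient of variation is an assumed, plausible value for a plasma protein assay, not a figure from the article.

```python
import math

# Standard normal quantiles for two-sided alpha = 0.05 and 80 percent power.
Z_ALPHA = 1.959964  # z at 0.975
Z_BETA = 0.841621   # z at 0.80

def samples_per_group(effect: float, cv: float = 0.25) -> int:
    """Approximate samples per group needed to detect a relative change
    `effect` (0.2 = 20 percent) given a combined assay/biological CV,
    via the classic formula n = 2 * (z_a + z_b)^2 * (sigma / delta)^2."""
    n = 2 * (Z_ALPHA + Z_BETA) ** 2 * (cv / effect) ** 2
    return math.ceil(n)

# A two-fold (100 percent) change needs only a handful of samples per group...
print(samples_per_group(1.0))   # → 1
# ...but a 20 percent change needs an order of magnitude more.
print(samples_per_group(0.2))   # → 25
```

The point is not the exact numbers but the quadratic penalty: halving the detectable effect size quadruples the required cohort, which is why moving from two-fold to 20 percent changes demands the much larger sample throughput described above.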

Why is mass spectrometry such a valuable tool?

For now, ELISA is typically the preferred method for less complex analyses. But there are a number of situations where mass spectrometry (MS) provides better data.

One is post-translational modification. For example, take a protein that has undergone citrullination – an irreversible post-translational modification, which causes major changes in the conformation of a protein, and can render it auto-antigenic. If citrullinated proteins within an organ are released into the blood stream as a result of injury or inflammation, they can generate an auto-antibody response. Auto-antibodies against citrullinated proteins are present in the majority of patients with rheumatoid arthritis, and it’s thought they may play a role in other autoimmune conditions. For example, damage to the heart can release citrullinated proteins from the heart muscle into the blood – if auto-antibodies are formed, the immune system may attack the heart. Monitoring levels of citrullinated proteins could identify the potential to develop auto-antibodies against different components of the cardiovascular system.

In this case, you have a modified (citrullinated) and unmodified form of the protein, and you need to measure both. Using ELISA would involve two separate assays and all the resulting issues around normalization. But with a mass spectrometer you can measure the unmodified and modified forms in a single assay with a single set of standards, which is much more robust. And you don’t have to stop at one protein: you can study post-translational modifications in several different proteins in a single, 10–20 minute assay. Imagine trying to harmonize all of those with ELISA – it would be a nightmare!
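A minimal sketch of the single-assay idea: with stable-isotope-labelled ("heavy") internal standards for both forms, each concentration comes from a light/heavy peak-area ratio in the same injection, so the modification occupancy needs no cross-assay normalization. All peak areas and spike amounts below are hypothetical illustrative numbers.

```python
def concentration(light_area: float, heavy_area: float, spike_fmol: float) -> float:
    """Concentration of an endogenous ('light') peptide from its peak-area
    ratio to a stable-isotope-labelled ('heavy') standard spiked at spike_fmol."""
    return light_area / heavy_area * spike_fmol

# Hypothetical peak areas from one run measuring both forms of the same peptide.
unmodified = concentration(light_area=8.0e5, heavy_area=4.0e5, spike_fmol=100.0)
citrullinated = concentration(light_area=1.0e5, heavy_area=4.0e5, spike_fmol=100.0)

# Both values come from the same injection and the same standard mix,
# so the occupancy of the modification is directly comparable across samples.
occupancy = citrullinated / (citrullinated + unmodified)
print(f"occupancy = {occupancy:.3f}")  # → occupancy = 0.111
```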

An exciting emerging area for MS is studying protein isoforms. Isoforms can be very complex; for example, the protein tropomyosin has four isoforms and over 20 sub-isoforms. Isoforms may be present at low concentrations or expressed only transiently. There’s a reason your cells make those isoforms and sub-isoforms, and they could have major effects on physiological and pathological processes. Right now, we’re working with Thermo Fisher Scientific and other companies on strategies to improve the identification and quantification of protein isoforms. It’s a whole world that most of us haven’t really explored, and the only way to understand what is going on is through proteomics. Fortunately, the tools now being built put that goal within reach, allowing us to dream bigger than ever before.

Will an increasing number of clinical and translational labs add MS to the toolbox?

I sure hope so. In the grant applications I review, I already see more frequent and sophisticated proteomics being incorporated. At present, sample numbers tend to be small, and that is where I feel there is a lot of untapped potential.

As more research laboratories adopt MS as a tool, mass spectrometers will become more “plug and play”, which is a crucial transition for their use in clinical labs. Fortunately, sampling is becoming easier and more convenient with new blood and plasma micro-sampling technologies, so we can more easily access populations who can’t come into the hospital regularly. My dream is to have mass spectrometers that could be used in CROs or pharmacies everywhere, enabling large-scale studies that reveal the true biological variability within and across populations in a way that’s never been done before. The more populations we look at, the more we’ll understand what is happening in a cell, so we need to engage with scientists in other areas and get them using our tools. That’s the future we have to build, and to get our instruments and methods out there they have to be robust.

How can we speed things up?

We’re revamping our whole biomarker development pipeline. As things stand, discovery cohorts are independent of verification and validation cohorts. We might start with 100 samples in discovery and spend a year working on those. Then we stop and decide which biomarkers to take forward to validation – and we currently have to use different MS-based approaches for the discovery and validation experiments.

We propose that it would be more effective if discovery and validation were a continuum with one workflow – for example, by increasing our discovery cohorts to 500 samples so that, provided the cohort is large enough, we can also validate within it. This would make both processes more time-efficient, while offering the same depth and even better quantitation. By analyzing more samples during discovery and validation, we should also capture more of the biological diversity and heterogeneity of responses, which should eliminate some of the surprises that can arise as we move biomarkers into applications in clinical labs.

How do you tackle variance?

We have to carefully consider the extent of biological and pathological variability within the clinical domain being targeted. That means thinking about which samples are chosen, and how big the sample bank needs to be to capture the full variability of the healthy-versus-disease spectrum and any potential overlaps with other disease states.

Variance can also arise in the workflow itself – everywhere from sample preparation, to running and harmonizing our mass spectrometers, to how we block and randomize our samples. By controlling technical variance, we can get a better handle on variations in biology.

Our goal is to implement “real-time” quality control (QC) for each step in the workflow. In this approach, as the samples go through the mass spectrometer we attempt to determine if all the QCs are on target. If they’re not, we try to find out what’s wrong with the sample and correct it immediately – similar to the strategies in place in a clinical chemistry lab. Depending on the outcome of the QC we may have to restart from sample preparation. The key is to get the sample back into the mass spectrometer without too much delay.
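A real-time QC check of this kind can be sketched as a simple tolerance rule, analogous to the 1-2s-style rules used in clinical chemistry labs; the QC target, standard deviation, and observed ratios below are all hypothetical illustrative values.

```python
def qc_on_target(value: float, target: float, sd: float, k: float = 2.0) -> bool:
    """Return True if a QC measurement lies within k standard deviations
    of its established target (a simple 1-2s-style acceptance rule)."""
    return abs(value - target) <= k * sd

# Hypothetical QC peptide with a target area ratio of 1.00 and an SD of 0.05.
for ratio in (1.02, 0.97, 1.15):
    status = "on target" if qc_on_target(ratio, target=1.00, sd=0.05) else "investigate / repeat prep"
    print(f"QC ratio {ratio:.2f}: {status}")
```

Evaluating each QC as the run proceeds, rather than at the end of a batch, is what lets a problem sample be sent back to preparation and returned to the instrument with minimal delay.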

How close are we to understanding variation between people?

Biological variability is hugely complex, as it includes person-to-person variability, the variability of the disease itself and any number of comorbid diseases. To realize the concept of individualized or precision medicine, we need to be able to look at not just one or two proteins, but an array of proteins and their modified forms. That’s going to mean collecting a lot of data, all of which has to be accurate and, when possible, quantitative. When people talk about big data, it concerns me when it isn’t followed by the words accuracy and precision – which is most of the time. I’m not qualified to speak for other fields but, when it comes to proteins, I know we need to be able to detect even a small change and know that it is real.

There are an awful lot of people in this world, and each one is unique. To understand how those differences influence the course of a disease and how each individual will respond to a given treatment is a huge challenge. But that’s the fun, exciting and remarkable place where we are in science today.

Jennifer Van Eyk is Director of the Advanced Clinical Biosystems Research Institute at Cedars-Sinai Medical Center, Los Angeles, CA, USA.
