AIDS red ribbon

The Molecular Detective


SARS-CoV-2 is not the only virus responsible for a pandemic this century; the human immunodeficiency virus, more commonly known as HIV, is another. Unlike SARS-CoV-2, however, HIV infection cannot be cured; once the virus is transmitted, infection is lifelong. Despite the significant progress researchers have made in understanding this virus, it has not been eliminated; in many developing countries, the HIV epidemic is still flaring.

What is HIV/AIDS?

HIV is a virus acquired via contact with contaminated bodily fluids; it infects a particular kind of immune cell called CD4+ cells, or T-helper cells. These cells are a crucial component of the immune system, as they are responsible for bringing all other immune cells and their effector responses together; they are the commanders. When HIV infects these cells, it reprograms them to produce more viruses, which go on to infect more CD4+ cells; T-helper cells become HIV factories, and in the process, they die. After the initial exposure to HIV, the virus can replicate for years, during which patients often show no symptoms. The real problems caused by HIV appear later: once lost, CD4+ cells are not replaceable. Because of their critical role in immunity, infected patients' immune systems become severely compromised, making them not only extremely susceptible to infections, but also unable to fight them. Once a critical threshold is reached at which almost all CD4+ cells have died, patients become severely immunodeficient, a stage better known as acquired immunodeficiency syndrome, or AIDS.

How HIV infects

As mentioned above, HIV targets a specific kind of immune cell called CD4+ cells (T-helper cells); this is because one of the molecules found on the outer shell of the virus, called gp120, mainly binds a surface protein unique to these cells: the CD4 protein. Binding of gp120 to CD4 (step 1 in the image below) allows fusion of HIV's membrane with the T cell's membrane (step 2), during which all the viral material contained inside the HIV membrane is injected into the T cell. Like most viruses, HIV contains all the components necessary for its own replication; it then uses the infected cell's machinery to carry out this task. In other words, the virus brings its own ingredients, but uses the cell's kitchen. Once inside the cell, it starts by turning its genetic material, in the form of RNA, into DNA, with the help of a special enzyme called reverse transcriptase (step 3). This double-stranded DNA molecule is then transported to the cell's nucleus, where the T cell's DNA is located, by another enzyme called integrase. The integrase not only brings the DNA strand to the nucleus, it also cuts the cell's DNA to integrate HIV's DNA into it. This is the point of no return for the infected cell, as viral genes are now encoded in its very own DNA (step 4). This is where the virus utilizes the cell's machinery to its own benefit; the T cell transcribes the DNA (step 5), then translates it into new viral proteins (step 6). Essentially, the virus gave the cell its ingredients and a recipe to follow, and the cell then does all the cooking, providing newly synthesized viral proteins for the virus to replicate. When all the necessary proteins are synthesized, they are assembled into a new particle (step 7), which buds from the cell's membrane (step 8) and is subsequently released to infect other cells (step 9).

HIV infection cycle

Diagnosis & Treatment

There are currently three ways to detect an HIV infection: an antibody test (with an ELISA, for example), an antigen/antibody test, or a nucleic acid test. The first two are most reliable several weeks to several months after exposure, as they require the patient to mount an HIV-specific immune response, which is not immediate. The latter test measures the amount of viral genetic material in the blood; it can yield results much earlier, but is much more expensive and less commonly used for diagnosis. It is, however, the gold-standard test for patient monitoring after diagnosis.

Indeed, though HIV infections cannot be cured, they can be treated well, in a way that avoids illness and limits transmission. Once a positive diagnosis is made, patients are offered antiretroviral (ARV) treatment to reduce viral replication in the blood. If taken consistently, this treatment reduces the viral load (the amount of viral RNA in the blood) to an undetectable level, enabling patients to live a healthy life. ARV has been shown to be very effective, but to guarantee patients' health, monitoring the viral load is crucial to ensure the treatment keeps working long term. Unfortunately, viral load monitoring is not standard practice in lower-income countries, mainly due to its cost and poor accessibility. Although diagnosing HIV infection is a necessary first step, regularly monitoring patients with HIV remains the true bottleneck in resolving the AIDS epidemic in developing countries.

The Next Step

A cheap and accessible test that can rapidly and sensitively detect the presence of HIV in the blood: this is the holy grail of the fight against the HIV epidemic. Today, one technology could relieve some of the burden that patients living with HIV face: focal molography.

Current viral load monitoring is done by extracting genetic material from cells in a patient's blood sample. Infected cells have incorporated HIV's genetic sequence into their own DNA: by extracting DNA from the cells in a blood sample and testing it, clinicians can determine how much of the virus's genetic material is present in a given amount of blood. This test is extremely sensitive, as the molecular interactions used for the readout are extremely stable and robust. For this reason, should the test indicate that no viral genetic material could be detected, it is safe to assume that the patient's viral load is negligible, meaning that the virus cannot be further transmitted and that the patient's immune system is protected. However, should the test detect genetic material from HIV, the patient's health could be compromised and there is a risk of transmission.

Conducting this test is expensive and impractical; it requires specialized laboratories and is most often not performed in clinics, particularly in developing countries. Furthermore, there can be significant waiting times between submission of the blood sample and the test readout, which can greatly affect patient outcomes. It is therefore not suited for patients needing to be tested several times a year.

Perhaps the main constraint of viral load monitoring is the need to test for viral RNA; this requires many intermediate steps (the RNA needs to be isolated before it can be measured), making it impossible to test a patient's blood sample directly in a clinic. One way to overcome this problem could be to use a different molecule to measure the presence of HIV, ideally one that can be detected without any isolation steps, such as a surface protein expressed by HIV. One such protein is gp120, a surface protein largely responsible for the infectivity of HIV. Another is p24, a structural protein that makes up most of the HIV viral core, or 'capsid'. Detecting the presence of these proteins could serve as a proxy for the integration of HIV's genetic material into a cell's DNA.

Detection of viral proteins in a blood sample could be performed within minutes using focal molography. For example, with a mologram composed of gp120's binding targets (e.g. CD4 proteins), cells expressing gp120 on their surface (i.e. cells infected with HIV) could easily be detected. This change of detection mechanism comes with some compromise: developing such a test would likely require establishing new clinical threshold values and ranges. Furthermore, it would provide only an indirect and perhaps less accurate readout for the presence of HIV, instead of the direct and irrefutable readout of a nucleic acid test. Nevertheless, focal molography would provide sufficient information for clinical purposes, and would additionally greatly facilitate regular HIV monitoring, as it would enable in-clinic testing with an immediate readout. This is arguably more important for resolving the AIDS epidemic; a less direct test is better than no test at all, which is currently the situation many patients face in developing countries.


HIV/AIDS is just one of many diseases in which focal molography could play a pivotal role. The speed of its readout, its sensitivity of detection, and its ease of use could not only revolutionize diagnostics and therapeutics in the Western world, but also open up new possibilities in developing countries.


Special thanks to Dominique Braun from USZ for his helpful insight on HIV infections.

Al-Jabri, A. A. (2003). How does HIV-1 infect a susceptible human cell? Current thinking. J Sci Res Med Sci, 5(1-2), 31-44.

Fauci, A. S., Pantaleo, G., Stanley, S., & Weissman, D. (1996). Immunopathogenic mechanisms of HIV infection. Annals of Internal Medicine, 124(7), 654-663.

Deeks, S. G., Overbaugh, J., Phillips, A., & Buchbinder, S. (2015). HIV infection. Nature Reviews Disease Primers, 1(1), 1-22.

schematic of focal molography

Focal Molography - A New Frontier in Biosensing


Focal molography was introduced in the latest blog article as a novel technology capable of detecting molecular interactions without acquiring any environmental noise (e.g. temperature gradients, buffer changes, and nonspecific binding). As such, it overcomes one of the most significant hurdles label-free biosensors have faced to date, and opens the door to a new era of biosensing.

Advantages of Focal Molography

Focal molography differs from established label-free biosensors in that the molecules themselves assemble the signal-generating structure. The core concept of focal molography lies in its use of a pattern of binding sites, a mologram, which has the interesting property of assembling analyte molecules such that they diffract a laser beam into a specific point in space, the focal point. By detecting the intensity at this point, we can observe all molecular interactions happening on the grating simultaneously. Molecules with no affinity to the binding sites on the mologram (i.e. molecules we are not interested in) do not assemble on the grating; because of this, the light these molecules diffract is not concentrated into the focal point, but rather scattered in random directions. This means that interactions we do not wish to detect, i.e. environmental noise, are simply not detected. With almost total rejection of environmental noise sources, focal molography brings a vast array of advantages to the table; the most important ones are discussed below.
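The contrast between coherent and random scattering is what makes the focal point such a clean readout. A minimal numpy sketch (illustrative numbers; unit-amplitude scatterers are an assumption, not a value from the text) shows how fields from molecules arranged in phase add up to an intensity proportional to N squared, while fields from randomly placed molecules largely cancel:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1000  # number of scatterers (illustrative)

# Coherent case: analyte molecules bound on the mologram scatter in phase,
# so their fields add up before the intensity is taken.
coherent_field = np.sum(np.exp(1j * np.zeros(N)))
coherent_intensity = np.abs(coherent_field) ** 2  # exactly N**2 here

# Incoherent case: randomly located background molecules scatter with
# random phases, so their fields largely cancel in the focal point.
random_phases = rng.uniform(0, 2 * np.pi, N)
incoherent_field = np.sum(np.exp(1j * random_phases))
incoherent_intensity = np.abs(incoherent_field) ** 2  # on the order of N

print(coherent_intensity)    # 1000000.0
print(incoherent_intensity)  # roughly a thousand times smaller
```

The quadratic scaling of the coherent sum is the reason the signal of interest stands out so strongly against the randomly scattered background.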



While most refractometric biosensors such as SPR intrinsically measure environmental noise and are therefore highly affected by it, focal molography remains immune to these perturbations of the signal. Indeed, as discussed in the article dedicated to SPR, refractometric biosensors detect molecular interactions based on slight changes in refractive index within the entire volume of the evanescent field. In refractometric biosensors, this volume is disproportionately large compared to the volume of the molecules present. The important consequence is that any small change in this field (e.g. due to environmental noise) affects the sensor's output significantly. This is not the case in focal molography: thanks to its nano-sized, self-referencing principle, focal molography provides unique signal stability, remaining largely unaffected by environmental influences.



In any other biosensor, signal acquisition is affected by environmental noise and drift. It is therefore often necessary to reference the measurement against a secondary, parallel sensor to account for these external influences and obtain the desired readout. Focal molography is what we call "self-referencing": instead of having to compare its values to a known stable value from an external reference channel, it stabilizes its values on its own. No pre-equilibration with buffer or temperature stabilization of the sensor is required, so the measurement can be started immediately. No matching of the refractive indices of different buffers is needed, as refractive index jumps are not detected.



Focal molography is compatible with complex media, such as cell culture media and buffers, and can thereby easily be used in combination with cell-based assays. A robust technology is crucial for the analysis of crude and unprocessed biological samples such as body fluids. This offers endless possibilities, as it provides a new source of important data for scientists and clinicians.



With focal molography, biological interactions are monitored in real time (as they are happening). What added benefits does this bring? On a purely molecular front, real-time measurements provide information on the affinity of a molecular interaction, which can be useful for scientists. Additionally, measuring in real time offers insight into small and/or rapid changes occurring in the sample, which would be impossible to detect retrospectively. Finally, real-time measurements provide immediate information; this aspect is crucial in diagnostics, for example, where focal molography would enable an immediate readout of viral or bacterial loads in patients, as well as regular screening for disease biomarkers.



One of the earliest blog articles discussed the advantages of label-free biosensors compared to their label-based counterparts. Briefly, label-free technologies detect the presence and/or activity of molecules of interest based on their biophysical properties, such as molecular weight, refractive index or charge, while label-based technologies require a special tag (usually another molecule) to be attached to them. The use of a label not only can alter the intrinsic properties of the molecule of interest (which can compromise the readout), but also involves an additional, non-trivial preparatory step. Label-free technologies are therefore preferable.



Due to its label-free nature and its nano-sized principle, focal molography is very well suited for multiplexed assays, whereby multiple assays are run in parallel. This contributes not only to the speed at which readouts can be obtained, but also to the amount of information that can be extracted from a single chip.



Although many biosensing technologies have been developed thus far, few have shown potential for miniaturization. Why? Surprisingly, although insufficient sensitivity is commonly assumed to be the bottleneck, it is mostly not the reason their miniaturization has stalled. Here again, the limiting factor is their cross-sensitivity, in particular to environmental influences, which stems from the largely disproportionate ratio of the sensing volume to the size of an individual molecule. Current sensors lose sensitivity when miniaturized because they can no longer be stabilized to minimize the influence of external sources on the signal. However, as focal molography requires no external stabilization thanks to its self-referencing principle, a miniaturized reader will have the same performance as a large benchtop instrument, enabling miniaturization without compromising the accuracy of the readout.


This article sheds light on the main advantages focal molography brings to the world of biosensing. Figure 2 shows a comparison between focal molography and other current biosensing technologies, including SPR, for the attributes discussed in this article.

comparisons chart

As this article has hopefully demonstrated, focal molography offers a novel set of possibilities for biosensing. But what can this technology do that has not been done before? How can we put its groundbreaking attributes to good use? Find out in our next article!



Frutiger, A., Fattinger, C. and Vörös, J., 2021. Ultra-Stable Molecular Sensors by Sub-Micron Referencing and Why They Should Be Interrogated by Optical Diffraction—Part I. The Concept of a Spatial Affinity Lock-in Amplifier. Sensors, 21(2), p.469. 

Frutiger, A., Gatterdam, K., Blickenstorfer, Y., Reichmuth, A.M., Fattinger, C. and Vörös, J., 2021. Ultra Stable Molecular Sensors by Submicron Referencing and Why They Should Be Interrogated by Optical Diffraction—Part II. Experimental Demonstration. Sensors, 21(1), p.9. 

Kübrich (2019). Molographic Peptide Arrays: Towards Label-free Protein Signaturing in Undiluted Blood Plasma.

An Introduction to Focal Molography


The previous article delved into the world of signal processing, where the challenges we face with environmental noise were underlined. The possibility of acquiring and subsequently processing data in Fourier space was discussed as a potential solution to this problem. But has any such technology ever been developed? In this article, we describe one that has: focal molography. 

The Why & The How

As evidenced by our previous articles, the core aim of biosensors is to detect molecular interactions in their natural milieu; this is a challenging task, as the targets we wish to detect are often extremely scarce in environments populated by many other molecules. Many biosensors have been developed thus far to discriminate target molecules from their less relevant counterparts; however, to our knowledge, no sensor has been capable of doing so without also detecting the environmental noise surrounding the molecules of interest. Recently, a new technology has successfully overcome this challenge, allowing it to sense molecular interactions alone, without detecting any environmental noise. The secret to this technology is a concept already well known to most: holograms.


Molecular Holograms for Light Diffraction 

Although many would picture a hologram as an arbitrary 3D object or person made of light, a hologram is simply a pattern that scatters light. The key physical phenomenon occurring in holograms is diffraction, whereby light waves "bend" around obstacles. A typical example of this is a beam of light passing through a thin slit, as shown below.

light diffraction through a thin slit
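To get a feel for the numbers, the first dark fringe behind a single slit satisfies sin(theta) = m * lambda / a. A short Python sketch with illustrative values (a red laser wavelength and a 5 micrometre slit are assumptions for the example, not values from the text):

```python
import numpy as np

wavelength = 633e-9  # red laser light, in metres (illustrative value)
slit_width = 5e-6    # 5 micrometre slit (illustrative value)

# First minimum of single-slit diffraction: sin(theta) = m * wavelength / slit_width
m = 1
theta = np.arcsin(m * wavelength / slit_width)
theta_deg = np.degrees(theta)
print(theta_deg)  # about 7.3 degrees
```

Even a slit only a few wavelengths wide bends the beam by several degrees, which is why regular arrangements of such obstacles can steer light so effectively.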

By arranging the obstacles in the path of the light beam in a spatially regular, or "coherent", pattern, the diffraction from the pattern can be precisely tailored in specific directions. Such coherent patterns are commonly used in the form of diffraction gratings; a diffraction grating is none other than a very simple hologram, namely a pattern that generates a deflected light beam. Should these gratings be created using molecules, we can likewise generate a pattern capable of scattering light; in short, we create a molecular hologram, or a mologram (cf. figure 3a).

Refractometric vs. Diffractometric Biosensors 

As a reminder, typical biosensors are surface-based devices, where a molecule specific to a target of interest (the capture probe) is immobilized onto a polymer coating. This capture probe is in contact with a complex biological sample containing a vast array of irrelevant molecules. Detecting only the molecules of interest in this sample is the main challenge we face.

As briefly described in the article dedicated to surface plasmon resonance (SPR), many different biosensors exploit the same biophysical property for molecular detection: the relatively high refractive index of biomolecules compared to water. Here, we distinguish two categories of biosensors: diffractometric (b) and refractometric (a). Both label-free diffractometric and refractometric sensors make use of the high refractive index of biomolecules to detect them, but the key difference between the two groups is how the molecules bind the sensor surface. While molecules bind the sensor surface arbitrarily in refractometric sensors, they bind in the form of a molecular hologram in diffractometric sensors. Another important difference is that, compared to refractometric biosensors, diffraction-based biosensors are almost completely unresponsive to background signal in the absence of the molecule of interest.

principle of refractometric vs. diffractometric biosensors

Using Molograms to Filter Noise 

Why is it helpful to create molecular patterns, or more specifically, molecular gratings? As mentioned above, the light diffraction caused by grating patterns can be tailored. This feature allows us to arrange molecules on the grating such that the diffracted light is condensed into one single voxel in space.

Figure b above depicts this phenomenon: when a laser beam is directed onto a molecular hologram organized in a grating-type pattern, the light is diffracted. The part of the light beam diffracted by the molecular grating is of particular interest; due to the coherent arrangement of this pattern, the light is also scattered coherently and is focused into a specific spot that can be measured. So, should this grating be exposed to a biological sample containing an analyte of interest, the target molecules would bind the capture probes only on the ridges. Any other molecule contained in the sample would be found in the grooves; these molecules also scatter light, but their light waves are scattered in all spatial directions (i.e. not into the focal point), as the molecules in the grooves are randomly located. The implications of this are crucial: if we measure the signal in the focal spot alone, we solely detect the molecular interactions happening on the ridges. In other words, we detect only the molecular interactions of interest.

Still confused? An analogy to put this into perspective.

principle behind fingerprint scanner

It may still be unclear how molography fits into the signal-processing theory discussed in the previous article. Fourier space was described as a superior alternative in the context of data acquisition, as it makes it possible to filter out noise very simply, contrary to noise filtering in real space. How does molography implement this? The mologram forms a diffractive lens that produces a Fourier-plane image only a few hundred microns away from the surface of the biosensor. If this is still confusing, here is a simplified way to understand it. Think of a fingerprint reader: when you place your finger onto the scanner and it displays an image of your fingerprint, only the ridges, which contain the relevant information for biometrics, appear on the image. This is because the grooves do not come into contact with the imaged surface. One way of seeing this is that the inherent pattern of your finger retains the relevant information and naturally "filters out" the irrelevant information found in the grooves. In short, the pattern itself is a filter! In this respect, molography is the same: by utilizing patterns of molecules, we detect the interactions we are interested in without detecting other analytes.


This article introduced a new type of diffractometric biosensor capable of discriminating the signal of interest from environmental noise. But what differentiates this technology from other diffractometric biosensors? What other advantages does it possess? How does it compare to other biosensors? The next article is dedicated to answering these questions!


Frutiger, A., Fattinger, C. and Vörös, J., 2021. Ultra-Stable Molecular Sensors by Sub-Micron Referencing and Why They Should Be Interrogated by Optical Diffraction—Part I. The Concept of a Spatial Affinity Lock-in Amplifier. Sensors, 21(2), p.469. 

Frutiger, A., Gatterdam, K., Blickenstorfer, Y., Reichmuth, A.M., Fattinger, C. and Vörös, J., 2021. Ultra Stable Molecular Sensors by Submicron Referencing and Why They Should Be Interrogated by Optical Diffraction—Part II. Experimental Demonstration. Sensors, 21(1), p.9. 

Kübrich (2019). Molographic Peptide Arrays: Towards Label-free Protein Signaturing in Undiluted Blood Plasma.

focusing lens filtering

The Big Con of SPR - And How to Solve it


Understanding the environmental noise problem encountered with surface plasmon resonance (and all refractometric biosensors), and the high measurement precision it demands, requires fundamental knowledge not only of how the desired binding signal is acquired in this technique, but also of why we detect changes (noise) we are not interested in measuring.

Signal Processing – The Basics

Let's begin by thinking about what we want to achieve with a biosensing technology such as SPR. Like most biosensors, the aim is to detect specific molecular interactions in real time. But the bigger question we must ask first is: how can we measure only this specific molecular interaction and nothing else? In other words, how can we measure the signal without the environmental noise it is buried in in real space?

To develop such a sensing concept, we need a basic understanding of signal processing. This is not an easy task, mostly because the concepts used in signal processing quickly become abstract. To avoid confusion, we will use an analogy for the array of molecular interactions we aim to detect: a simple image, such as one taken with your smartphone camera. In the language of signal processing, a smartphone image and an array of molecular interactions are the same. So, what is an image, mathematically? It is just a combination of numbers arranged in what we call a matrix (a table, of sorts). The numbers inside this matrix specify the intensity of light in the image at specific coordinates.

matrix corresponding to picture of Zurich
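To make the matrix picture concrete, here is a toy greyscale "image" in Python; the 3x3 size and the intensity values are arbitrary illustrative choices, not taken from the picture above:

```python
import numpy as np

# A tiny 3x3 greyscale "image": each entry is the light intensity
# (0 = black, 255 = white) at that pixel coordinate.
image = np.array([
    [  0, 128, 255],
    [ 64, 192,  32],
    [255,   0, 128],
])

# The value at row 0, column 2 is the brightness of the top-right pixel.
print(image[0, 2])  # 255
print(image.shape)  # (3, 3): 3 rows x 3 columns
```

A real smartphone photo is exactly this, just with millions of entries (and three such matrices for the red, green and blue channels).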

In many cases, the biggest challenge when taking a photograph is to capture exactly what is in front of the camera, e.g. the faces of the people in the picture; in this respect, it is very similar to detecting biosignals. The issue both these processes face is also the same: noise. Noise is essentially a range of signals that we don't want to acquire, because they interfere with the proper readout of the signal we do want (in our analogy, the faces of the pictured people). It can originate from environmental influences: for instance, if you took a picture facing the sun, the faces of the people you want to photograph would not appear, because the sunlight overexposes your sensor. This issue stems from problems with data acquisition, e.g. the settings you use on your camera when taking the picture. One way to fix this is to directly account for environmental noise during the acquisition of the data; however, this is more challenging than it sounds, because you don't always know where the noise will come from and how much it will interfere with the signal of interest. Usually, it only becomes evident once the data (in our example, still the image) has already been acquired. This remains problematic, because not all environmental noise can be filtered out once the signal is obtained. Indeed, an overexposed photograph can be filtered to enhance certain contrasts, but filters cannot entirely remove the overexposed nature of the image. This is because every pixel in the image is immersed in environmental noise, and these signals fully overlap: they can no longer be fully discriminated. For this reason, it is crucial to exclude as much noise as possible before data acquisition. Filtering noise is the key to this; but how can it be achieved before signal acquisition? And how is it different from using filters on a smartphone?

Filtering Signals

time and frequency domain schematic

Filtering can be done in different ways and, more importantly, in different spaces: in real space, or in Fourier space. Real space describes space the way we are most familiar with it, the way we see it, in the typical three dimensions. Fourier space, on the other hand, isn't a "space" the way we conceive it in our minds, but a mathematical analogue of one. It is used to re-express images from real space in terms of their frequency components. The tool we use to perform this transformation is called the Fourier transform. Mathematically, it converts a complicated function (representing the raw signal or the raw image) into several simpler functions. Figure 1 depicts this concept; as you can see, the raw signal shown in the time domain (in real space) can be separated into simpler components in the so-called frequency domain. When graphically representing these components in the frequency domain, we obtain a very different graph. The essential take-away is the following: although the frequency-domain graph looks very different, it represents the same original signal from real space, just in a different manner.
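This decomposition can be reproduced in a few lines of Python: a signal built from two pure tones (5 Hz and 50 Hz, illustrative values not taken from figure 1) shows up in the frequency domain as two sharp peaks at exactly those frequencies:

```python
import numpy as np

fs = 1000                    # sampling rate in Hz (illustrative)
t = np.arange(0, 1, 1 / fs)  # one second of samples

# A "raw" signal made of two pure tones: 5 Hz and 50 Hz.
signal = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 50 * t)

# The Fourier transform re-expresses the same signal in the frequency domain.
spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(len(signal), d=1 / fs)

# The two largest peaks sit exactly at the two component frequencies.
peak_freqs = freqs[np.argsort(np.abs(spectrum))[-2:]]
print(sorted(peak_freqs))  # [5.0, 50.0]
```

The complicated-looking time-domain curve is thus fully described by just two numbers in the frequency domain.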

Images in Fourier Space

Signals can be represented both in real space and in Fourier space; this applies to images as well! Below is an example of an image in real space (on the left) vs. the same image in Fourier space (on the right). In Fourier space, signals closest to the center are low-frequency signals, while high-frequency signals are found towards the periphery.


image in real space vs. in Fourier space


Although these images look vastly different, they represent the same information in different manners. In short: there is more than one way to see an image (and biosignals)!

Why is this important and why is it useful to represent biosignals and images in Fourier space instead of real space? It all comes back to the original question we asked at the very beginning of this article: When trying to detect a specific signal, e.g. a molecular interaction, or trying to capture a specific image, how can we detect exactly what we would like to detect, and nothing else? The answer to this is to perform the measurement in Fourier space instead of real space.

But why? Although analyzing signals and images in real space is much more intuitive, it is much harder to handle the data in this space once it is acquired. As mentioned previously, data acquisition is often faulty, because we cannot always optimize how we acquire our data in a way that excludes all unwanted signal. This is particularly difficult with biosignals, because we cannot see environmental noise the way we do in an image. This is why we must apply filters before the data is acquired, unlike applying a filter to an already-taken image on our smartphones. However, when looking at the signal acquired in the time domain in figure 1, for example, you will notice that filtering out unwanted signal becomes a tedious task. Just by looking at this function, how can we know which parts of it we want and which parts we don't? Moreover, how can we remove those parts across the entire function? Likewise, for the image, how can we filter out the noise and keep the desired information? The truth is, short of trial and error, it is very difficult to do this in real space. However, when looking at the same information in Fourier space, things become much simpler. Looking at figure 2, for instance, in the right image we see a bright spot in the middle that dims towards the edges; this means that the image contains a lot of low-frequency signal (slow or long-ranged) and only little high-frequency signal (fast or short-ranged). It so happens that noise often belongs to the low-frequency range: knowing this, we can simply filter out the signal at the center of the image and keep everything around it. Similarly, with the signal in figure 1, we can remove the peak appearing at low frequency and keep the peak at high frequency.
Once the filtering is done, we use the inverse Fourier transform to convert the newly filtered data from Fourier space back into real space, thus obtaining a "clean" signal that we can interpret!
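The whole pipeline (transform, cut away the low-frequency band, transform back) fits in a short numpy sketch. The frequencies and the 10 Hz cutoff are illustrative choices, not values from the text:

```python
import numpy as np

fs = 1000                    # sampling rate in Hz (illustrative)
t = np.arange(0, 1, 1 / fs)

wanted = np.sin(2 * np.pi * 50 * t)      # high-frequency signal of interest
drift = 2.0 * np.sin(2 * np.pi * 1 * t)  # slow low-frequency "noise"
raw = wanted + drift

# Transform to Fourier space, zero out everything below 10 Hz, transform back.
spectrum = np.fft.rfft(raw)
freqs = np.fft.rfftfreq(len(raw), d=1 / fs)
spectrum[freqs < 10] = 0
cleaned = np.fft.irfft(spectrum, n=len(raw))

# The cleaned signal is essentially the wanted 50 Hz component alone.
print(np.max(np.abs(cleaned - wanted)))  # close to 0
```

Removing the drift in the time domain would require knowing its exact shape; in the frequency domain it occupies a few bins that can simply be set to zero.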

Signals in Surface Plasmon Resonance

At this stage, it may be unclear how all of this relates to surface plasmon resonance technology. The aim of SPR is to detect biosignals. Because the signals we want to detect with SPR are mingled with a large noise component (environmental noise) that we are not interested in, we face the problem described throughout this article: how to detect signal without noise. This is crucial for SPR in particular, as without a robust process to solve this issue, the signal we want to detect gets very easily lost in a sea of noise. In practice, this is what makes SPR so challenging; it is extremely sensitive, but this sensitivity does not discriminate between desired signal and noise. Indeed, SPR samples its data in real space. Every data point we sample contains noise, and the weak signal is diluted across all of them. To collect enough of the interesting signal, we need to sample many data points, which means we also acquire monumental amounts of noise.

Still confused? An analogy to put this into perspective.

Imagine you are standing on a standard scale, one you would use to weigh yourself, while holding a shot glass containing a bit of water. In this analogy, you represent the environmental noise, and the water poured into the glass represents the signal, i.e. the change you want to detect. If a few drops of water were added to the shot glass, the change in weight would most likely not be detected by the scale you are standing on, as the increment is too small. To detect this change on that scale, it would need to have extremely high precision (i.e. be able to measure in the gram range) whilst also covering a large range of weights (e.g. 0-100 kg). If, however, you were to weigh the shot glass alone on a new scale measuring only in the gram range (e.g. 0-10 g), the added weight of the water would be detected. Thus, the new scale can detect the change with higher accuracy. More importantly, the absolute precision required to detect the added water, in this example 1 g, remains the same for both scales; when comparing the relative precision of the two scales, however, the smaller scale shows a significant advantage. Indeed, 1 g in a measuring range of 10 g is a far less demanding requirement than 1 g in a range of 100 kg… Surface plasmon resonance is analogous to the big scale: it requires extremely high precision to detect signals of interest, whilst also having to sample a very large range of signals.
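The arithmetic behind the analogy, using the numbers from the text:

```python
# Same absolute precision (1 g), very different relative precision
# depending on the measuring range of the scale.
big_scale_range_g = 100_000      # a 0-100 kg bathroom scale, in grams
small_scale_range_g = 10         # a 0-10 g kitchen scale
precision_g = 1                  # the added water weighs about 1 g

relative_precision_big = precision_g / big_scale_range_g
relative_precision_small = precision_g / small_scale_range_g

print(relative_precision_big)    # → 1e-05
print(relative_precision_small)  # → 0.1
```

A relative precision of one part in 100,000 is what makes the "big scale" (and, as we will see, SPR) so demanding to build.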


In a Nutshell

How can we untangle these signals from each other across all the data points we sample? Could we avoid this problem and circumvent the noise acquisition completely? By sampling our data in Fourier space instead of real space, we achieve just that. Signal and noise are nicely separated; if we place our detector at the signal's location, we can acquire only the signal and never even see the noise. This brings a significant advantage to the table: our detector no longer requires extreme relative precision. Just as we do not need a 100 kg-range scale with 1 g precision to measure a few drops of water added to a glass, we do not need a signal detector sampling a large range at high precision. We can simply use a cheaper scale, or signal detector, with a much smaller range and much lower relative precision. SPR measures signals the hard way; how can we design a biosensor that does it the smart way? Has such a biosensor ever been made? Find out in the next article!


Frutiger, Andreas, Christof Fattinger, and János Vörös. “Ultra-Stable Molecular Sensors by Sub-Micron Referencing and Why They Should Be Interrogated by Optical Diffraction—Part I. The Concept of a Spatial Affinity Lock-in Amplifier.” Sensors 21.2 (2021): 469.

Frutiger, Andreas, et al. “Ultra Stable Molecular Sensors by Submicron Referencing and Why They Should Be Interrogated by Optical Diffraction—Part II. Experimental Demonstration.” Sensors 21.1 (2021): 9.

SPR explanation diagram

Surface Plasmon Resonance (SPR)

Surface Plasmon Resonance

The past articles have covered some of the most common examples of label-based technologies (see our previous article – Practical Examples of Biosensors); in the following we discuss one of the most popular label-free technologies on the market.

What is SPR?

Surface Plasmon Resonance (SPR) is an optical biosensing technology that allows the detection of molecular interactions. The name refers both to the technology and to the physical phenomenon behind it. SPR has been an established technology for some 35 years, used to detect analytes in solution with very high sensitivity.

How does it work?

The SPR phenomenon occurs when transverse magnetic (TM) polarized light strikes an interface between a metal and a dielectric material such as glass. When light travels from a medium with a higher refractive index (n1) to a medium with a lower refractive index (n2), it can be totally reflected, as shown in the figure below. This happens at any angle θ above a certain threshold, the critical angle, which can be calculated from the two refractive indices.


refractive index explanation diagram
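The threshold in question follows from Snell's law: total internal reflection sets in above the critical angle θ = arcsin(n2/n1). A small sketch, using illustrative refractive indices for glass (≈1.52) and water (≈1.33):

```python
import math

def critical_angle_deg(n1: float, n2: float) -> float:
    """Critical angle for total internal reflection (Snell's law),
    for light travelling from index n1 into a lower index n2."""
    if n2 >= n1:
        raise ValueError("Total internal reflection requires n1 > n2")
    return math.degrees(math.asin(n2 / n1))

# Glass-to-water interface: total reflection above roughly 61 degrees.
print(round(critical_angle_deg(1.52, 1.33), 1))  # → 61.0
```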

How can we detect and quantify molecular interactions with this setup?

We start by covering a gold film with a repellent surface coating made of proteins. We then embed ligands specific to the analyte of interest (recognition molecules) within this coating. The coated gold film is placed onto a glass layer, with the coating facing upwards. The sample solution that may or may not contain the analyte of interest (mobile phase) flows over this coated surface. If the sample contains the analyte, it will bind to its specific ligands on the surface. When we shine polarized light onto this surface at a specific angle θ, a new wave (called a “mode”) is excited: a surface plasmon. Because energy is required for these waves to form, part of the energy normally redirected into the reflected light wave is used to create them. At this specific angle θ, where the surface plasmon is formed, all the light energy goes into forming the SP mode; the reflected light is virtually entirely extinguished.


SPR explanation diagram

When preparing the surface plasmon resonance biosensor, the specific angle θ at which this effect occurs is established. Measuring this angle is crucial: whenever a molecule binds to the recognition molecules in the surface coating on the gold film, the refractive index close to the surface is altered, which in turn slightly shifts the angle at which the plasmon is launched.

By letting the sample of interest flow over the coated gold film, we can measure the angle-dependent intensity of the reflected light beam with a photodetector array, as shown in the figure below. This angle-dependent light intensity reflects the refractive index close to the surface, which is itself proportional to the amount of analyte binding: by using the angle shift in the reflected light beam, we can indirectly quantify the amount of analyte binding (or not) to the surface of the biosensor and plot a binding curve, called a sensorgram (see figure 3).
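The read-out step can be sketched numerically. In this toy model the dip shape and all numbers are invented (a real calculation would use Fresnel equations); the point is simply that the resonance angle is the location of the intensity minimum on the photodetector array, and binding shifts it:

```python
import numpy as np

angles = np.linspace(60, 70, 2001)   # degrees, illustrative angular range

def reflectivity(resonance_angle):
    # Lorentzian-shaped dip centred on the resonance angle (toy model)
    return 1 - 0.9 / (1 + ((angles - resonance_angle) / 0.2) ** 2)

before = reflectivity(65.00)   # bare sensor surface
after = reflectivity(65.03)    # after analyte binding: a tiny shift

# The photodetector array effectively does this: locate the minimum.
shift = angles[np.argmin(after)] - angles[np.argmin(before)]
print(round(shift, 3))         # → 0.03
```

Tracking this minimum over time, as sample flows over the surface, is what produces the sensorgram.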


SPR and sensorgram

For an animated explanation of SPR, visit

Hungry for more?

Are you wondering how sensorgrams differ between samples with and without analyte? Or between analytes with high vs. low affinity? Head over to Surface plasmon resonance | Cytiva, formerly GE Healthcare Life Sciences, for animated depictions of how to interpret a sensorgram under various conditions.

Advantages & Disadvantages of SPR

SPR offers many advantages compared to other biosensing technologies, primarily because of the sensitive, real-time, and quantitative information it yields. Indeed, SPR has very high sensitivity in detecting the presence of analyte via binding/non-binding events (down to a picogram of molecules per mm2 can be detected, equivalent to 1/300 of a monolayer of water molecules on a surface!). The real-time nature of the measurement enables measuring the affinity and the kinetics of the studied molecular interaction, which is crucial in drug development. This information is provided by the sensorgram, a type of graph produced in real-time as the binding reactions occur (see example below). Real-time measurements are very attractive in biological research settings; even more so when they can be done label-free (cf. article “What is a Biosensor?”), as is the case with SPR. Finally, due to the extremely high sensitivity of SPR, measurements can be done with very small sample volumes (a few microliters are sufficient).

In spite of its many advantages, SPR does have caveats. The main issue is that SPR only measures the refractive index change at the sensor surface; it is a so-called “integrative sensor”. Imagine a standard scale you would use to weigh yourself: if you put apples and oranges on it, you will not be able to discriminate the weight of the apples from that of the oranges, or count how many of each you have, just based on the number given by the scale; all you get is the total weight. Temperature and medium changes, as well as nonspecific binding (i.e. molecules that bind the surface but that we are not interested in detecting), change the refractive index at the surface too; as with the scale, we do not know exactly what we are reading out. SPR is heavily affected by these external influences and requires stabilization. Moreover, it requires long equilibration times, buffers cannot be readily switched during a measurement, and one can only measure molecular interactions in well-defined buffers. In real biological samples, there is a myriad of background molecules in solution that also bind to the sensor surface (nonspecific binding).

Granted, these non-specific interactions occur with lower affinity than the specific interactions with the analyte of interest. However, in most cases, there are many more non-specific molecules in the testing sample than specific ones (often a million to a billion times more). The non-specific binding therefore completely obscures the binding signal and degrades the limit of detection. The cross-sensitivity of SPR towards medium changes, temperature, and non-specific binding can be summarized as the environmental noise problem of SPR. Finally, besides SPR's susceptibility to environmental noise, the angle shifts detected when molecular binding occurs are minuscule. One therefore needs to measure the angle with a relative precision of 10⁻⁵, a precision equivalent to being able to see the Eiffel tower from China… This is only achievable with expensive scientific cameras and equipment.

Did you know? 

Although SPR has proven to be a very attractive technology to analyze molecular interactions, its limitations are vast in the context of crude biological samples and non-stabilized measurements. Furthermore, the angle shift of the SPR resonance needs to be measured extremely precisely.

Why is this the case? Why are SPR measurements so challenging? Could it be possible to create a technology that encompasses all the advantages of SPR (label-free, real-time, sensitive, and quantitative), but that could also be used in non-purified samples and better filter out environmental and experimental noise? Stay tuned – The answer to this question will be discussed in our next blog article.


Tang, Y., Zeng, X. and Liang, J., 2010. Surface plasmon resonance: an introduction to a surface spectroscopy technique. Journal of chemical education, 87(7), pp.742-746.

Surface plasmon resonance | Cytiva, formerly GE Healthcare Life Sciences


Katharina Kübrich, 2019. Molographic Peptide Arrays: Towards Label-Free Protein Signaturing in Undiluted Blood Plasma. ETH Zürich Master’s Thesis, p. 12.


Damborský, P., Švitel, J. and Katrlík, J., 2016. Optical biosensors. Essays in biochemistry, 60(1), pp.91-100


Heterogeneous vs. Homogeneous Assays

Heterogeneous vs. Homogeneous Assays

After giving practical examples of biosensors, today's article clarifies the differences between heterogeneous and homogeneous assays. To properly understand the differences between the two, a few definitions need to be clarified:

Assay: testing procedure for estimating the concentration of a pharmaceutically active substance in a formulated product or bulk material (1).  

Antibody: specialized Y-shaped proteins produced by our immune system that bind like a lock-and-key to the body’s foreign invaders — whether they are viruses, bacteria, fungi or parasites (3). When antibodies bind their specific targets, our immune cells recognize the invader covered in antibodies more efficiently, and can then eliminate it. 

Antigen: any substance, living or not, that triggers an immune response. 

What is the difference? 

Both of these assay classes most commonly comprise what we call immunoassays, “bioanalytical methods that measure the presence or concentration of analytes in a solution through the use of an antibody or an antigen as a biorecognition agent” (1). Homogeneous assays can yield accurate read-outs without having to separate the analyte of interest from the biomolecules (e.g. labeled antibodies) used to detect it. Heterogeneous assays, on the other hand, require one or more separation steps, in which unbound antibodies and/or unbound analyte must be washed away. This makes heterogeneous assays longer and more complex to conduct; however, they are often more precise than homogeneous assays, and can be very helpful for detecting more complex analytes. Homogeneous assays are usually reserved for the detection of small, simple molecules.

EMIT- A Homogeneous Assay 

Enzyme Multiplied Immunoassay Technique, otherwise known as EMIT, is an assay used to detect the presence or the amount of an analyte in a solution. As its name implies, it uses enzymes to yield this result. The main component of the assay is a complex made of an enzyme (“reporter enzyme” in the figure below) attached to the analyte of interest. The enzyme's active site (the part responsible for specific binding) is specific for a substrate that becomes fluorescent upon binding the enzyme. In short, when this substrate and the enzyme-analyte complex bind, the substrate changes color (yielding a detectable product). These complexes are added to a sample solution, usually obtained from a patient; when the color change of the substrate occurs, the whole sample solution changes color. This, however, still does not explain how we can determine the presence of the analyte of interest in the sample. To this end, we add an antibody specific for the analyte of interest (anti-analyte antibody) to the sample. Here, there are two scenarios: either the sample contains the analyte of interest, or it does not.

If the sample does not contain the analyte of interest, the antibody can only bind the analyte attached to the enzyme (in the enzyme-analyte complex), because there is no free analyte present in the sample solution. When this happens, the antibody blocks the active site of the enzyme. This means that no other molecule can bind this site, because there is no access possible. The color-substrate that would usually bind there and become fluorescent can therefore no longer bind; it never becomes fluorescent, and there is no color change in the sample. If, however, the solution does contain the analyte of interest, the added antibodies have many more molecules to bind: they will primarily bind the free analyte in solution (because of its easy access), as well as some of the enzyme-analyte complexes. Nevertheless, some of these complexes will remain unbound by the antibody; there the color-substrate can bind and become fluorescent, and the sample changes color as well.
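The two scenarios can be captured in a toy model. All the numbers below are illustrative, and a real EMIT read-out is a continuous fluorescence intensity rather than a simple count; the sketch only shows the competition logic:

```python
def emit_signal(free_analyte, complexes=100, antibodies=100):
    """Toy model of EMIT: antibodies bind free analyte first; any left
    over block the enzyme-analyte complexes; fluorescence comes only
    from unblocked complexes."""
    leftover_antibodies = max(0, antibodies - free_analyte)
    blocked = min(complexes, leftover_antibodies)
    return complexes - blocked   # fluorescent (unblocked) complexes

print(emit_signal(free_analyte=0))    # → 0   (no analyte: no colour change)
print(emit_signal(free_analyte=60))   # → 60  (more analyte, more signal)
print(emit_signal(free_analyte=200))  # → 100 (signal saturates)
```

More analyte in the sample means fewer antibodies left to block the reporter enzymes, hence more colour: exactly the relationship the assay exploits.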

Mechanism of EMIT

For a more animated explanation, go watch the video at : EMIT immunoassay    

ELISA- A Heterogeneous Assay 

ELISA stands for Enzyme Linked Immunosorbent Assay; like its homogeneous counterpart EMIT, it is a heterogeneous assay that can determine the presence or the quantity of an analyte. 

There are several types of ELISA assays, among them indirect, sandwich, and competitive ELISA (see figure below). All three involve an analyte, at least one antibody, and an enzyme that changes color when a desired reaction occurs. What makes all types of ELISA heterogeneous are the washing steps (also depicted), which eliminate any non-bound molecules, be it the analyte of interest or its specific antibody.

One of the most commonly used ELISAs is the indirect assay. Typically, it is used to detect the presence of an antigen, for example a virus. The test begins by attaching the antigen to the bottom of a plate. After washing the plate to remove any unbound antigen, we add an antibody specific for this antigen (primary antibody). After another wash step to remove any unbound antibodies, we add another antibody (secondary antibody), this one specific for the first. The secondary antibody is usually raised in an animal (e.g. rabbit), so that it recognizes the human antibody added first. It is also conjugated to an enzyme that will react with a substrate administered in the final step of the assay. Before adding this substrate, we wash the plate one more time to remove any unbound secondary antibody. Finally, we add the substrate: upon reacting with the enzyme, it induces a color change in the solution. The color change indicates the presence of the analyte of interest, and its intensity is proportional to the amount of primary antibody present, and thereby also to the amount of analyte.
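To turn that color intensity into a concentration, quantitative ELISAs are commonly read off a fitted standard curve, often a four-parameter logistic (4PL) model. A minimal sketch with hypothetical parameter values:

```python
def four_pl(x, a, b, c, d):
    """Four-parameter logistic, a common model for ELISA standard curves:
    a = response at zero concentration, d = response at saturation,
    c = concentration at the inflection point, b = slope factor."""
    return d + (a - d) / (1 + (x / c) ** b)

def inverse_four_pl(y, a, b, c, d):
    """Invert the curve to read a concentration off a measured absorbance."""
    return c * ((a - d) / (y - d) - 1) ** (1 / b)

# Illustrative parameters for a hypothetical standard curve.
params = dict(a=0.05, b=1.2, c=10.0, d=2.0)
absorbance = four_pl(10.0, **params)    # at x = c the curve is halfway up
print(round(absorbance, 3))             # → 1.025
print(round(inverse_four_pl(absorbance, **params), 3))  # → 10.0
```

In practice the four parameters are fitted to a dilution series of known standards measured on the same plate as the samples.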

Comparison between various ELISA assays

As shown in the figure on the left, there are other types of ELISA; each provides various advantages and disadvantages, displayed in the table on the right. 

Did you know? Immunoassays and SARS-CoV-2 

One of the biggest disadvantages of ELISA in the past was its low-throughput readout. Roche has simultaneously solved this problem and created a fast detection method for SARS-CoV-2 based on their general Elecsys assay technique. It is “an immunoassay for the in vitro qualitative detection of antibodies (including IgG) to SARS-CoV-2 in human serum and plasma”. It utilizes a principle very similar to ELISA, called ECLIA, pictured below.

ECLIA mechanism


  1. Biological Assays, Introduction to Biological Assays 
  2. LiveScience, What are antibodies? 
  3. Ju, H., Lai, G. and Yan, F., 2017. Immunosensing for detection of protein biomarkers. Elsevier. 
  4. SlideShare, Homogeneous and heterogeneous immunoassay 
  5. Engvall, E., 1980. [28] Enzyme immunoassay ELISA and EMIT. In Methods in Enzymology (Vol. 70, pp. 419-439). Academic Press. 
  6. Abcam, ELISA: basic principles and types of ELISA assay 
  7. Elecsys® Anti-SARS-CoV-2 


  1. Pharmaceutical Analysis, Chapter 24 – Page 8 
  2. Boguszewska, K., Szewczuk, M., Urbaniak, S. and Karwowski, B.T., 2019. Immunoassays in DNA damage and instability detection. Cellular and Molecular Life Sciences, pp.1-16. 
  3. Elecsys® Anti-SARS-CoV-2 
  4. Chiu, M.L., Lai, D. and Monbouquette, H.G., 2011. An influenza hemagglutinin A peptide assay based on the enzyme-multiplied immunoassay technique. Journal of Immunoassay and Immunochemistry, 32(1), pp.1-17.

Glucosemeter and User

Practical Examples of Biosensors

The definition and classification of biosensors were discussed in our previous blog post “What is a Biosensor?”. But what exactly is a biosensor used for? Read more about the following two examples of label biosensors. 

Label-based Biosensor: The Pregnancy Test 

Perhaps the most famous example of a label-based biosensor is the pregnancy test. It is a particular implementation of a so-called lateral flow assay (LFA). It is a masterpiece of science due to its simplicity and accuracy (reportedly 99%!), arguably unparalleled by any other biosensor. Its success is mainly based on the fact that it reports only a yes-no answer (pregnant vs. non-pregnant); no exact quantification is required.

The pregnancy test, like any lateral flow assay, is composed of three areas, each with its own function. When a pregnant woman's urine is applied to the test, it first travels to the reaction zone via capillary forces: there, a hormone produced only by pregnant women, hCG (human chorionic gonadotropin), binds to antibodies specific to that hormone. These antibodies have an enzyme (horseradish peroxidase, HRP) attached to them. Once hCG binds to the antibodies in the reaction zone, the urine sample with the hCG-antibody complex continues to travel along the test strip to the test zone.

There, the hCG-antibody complex binds another antibody, also specific for hCG. This antibody has a coloring agent attached to it, typically a gold nanoparticle (colloidal gold is typically reddish in color). At this stage, after both antibodies have bound hCG, the HRP enzyme from the first antibody activates the coloring agent, amplifying the signal enzymatically and causing a color change, which is the first line that appears on the pregnancy test. The last zone through which the sample flows is the control zone. There, any free (unbound) antibodies are bound by a third antibody carrying a dye molecule. When binding occurs, the second strip appears, confirming that the test worked and wasn't faulty.

Go to How do pregnancy tests work? - Tien Nguyen for an animated explanation of the pregnancy test. 

Did you know? Gold nanoparticles, used as a coloring agent in the pregnancy test, are not a modern invention. In fact, gold nanoparticles were already used in medieval times to color glass windows in churches: gold nanoparticles trapped in a glass matrix create a ruby color.

The Glucose Meter 

As the name suggests, the glucose monitor is a biosensor aimed at monitoring glucose levels in the blood, or more specifically, measuring the amount of glucose in a person’s blood at a given time point.  

The glucose meter senses glucose (the compound of interest, or analyte) by utilizing an enzyme that naturally reacts with it: glucose oxidase. When a drop of blood is administered onto a glucose strip, the glucose in that blood sample reacts with the glucose oxidase enzymes contained in the strip. The strip itself is inserted into the glucose meter, where it contacts an interface with an electrode. As the biochemical reaction between the glucose and the glucose oxidase occurs, an electric current is generated (by the electrons released during the reaction, cf. figure), which is sensed by the electrode. The glucose meter then shows a number on its display, corresponding to the strength of the electric current sensed by the electrode.
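The last step, converting the sensed current into the displayed number, amounts to applying a calibration function. A minimal sketch assuming a simple linear calibration; the constants here are invented for illustration, and real meters use factory-calibrated strips:

```python
def glucose_mg_dl(current_uA, slope=25.0, intercept=0.5):
    """Toy linear calibration (illustrative constants, not from any real
    device): map the measured electrode current in microamperes back to
    a blood glucose concentration in mg/dL."""
    return (current_uA - intercept) * slope

print(glucose_mg_dl(4.5))   # → 100.0
print(glucose_mg_dl(8.5))   # → 200.0
```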


Did you know? The glucose monitor, otherwise referred to as the “glucometer”, was the first biosensor ever invented. Its technology is attributed to Leland Clark, who began working on the oxygen electrode in 1956, and to Anton H. Clemens, who later developed the device. 


  1. MIT School of Engineering, » How do glucometers work? 
  2. General Introduction and Application in Blood- Glucose Level Monitoring -  ppt video online download, 
  3. Koczula, K.M. and Gallotta, A., 2016. Lateral flow assays. Essays in biochemistry, 60(1), pp.111-120. 
  4. Bahadır, E.B. and Sezgintürk, M.K., 2016. Lateral flow assays: Principles, designs and labels. TrAC Trends in Analytical Chemistry, 82, pp.286-306. 

What is a Biosensor?

Biosensor Definition

Although the term “biosensor” may seem quite clear at first glance, its meaning is less straightforward. A linguist might say a biosensor is a device that “senses life”, as its etymology would suggest. A biologist, on the other hand, would argue that the term “life” is too broad, while a physicist would likely be confused by the meaning of “sensing”. The question then is: what is a biosensor?



A Definition for All

One of many official definitions states that a biosensor is “a device that uses specific biochemical reactions mediated by isolated enzymes, immunosystems, tissues, organelles or whole cells to detect chemical compounds” (IUPAC definition). Put simply, this means that a biosensor utilizes molecular interactions from or based on those in living systems to detect a compound of interest. 

There is a myriad of ways to accomplish this, which is why finding a definition that suits all possible biosensors is a challenging task. There are however three basic elements all biosensors have: 

  • a bioreceptor: any biological compound capable of detecting the compound of interest (aka “analyte”, cf. figure). Typical examples include biomolecules such as enzymes, antibodies, but also living organisms such as cells.
  • a signal processor: the part of the biosensor that will convert the physicochemical signal from the receptor to a quantifiable signal we can interpret.
  • a transducer: the part of the biosensor that will link the bioreceptor detecting the compound to the signal processor.


Label vs. Label-Free Biosensors

Biosensors can be classified according to various parameters, such as what type of bioreceptor they use, what transduction type, or what signal processing mechanism they use. Although these classifications can be very useful, they don’t always provide the most practical information. One broader way to categorize biosensors is by separating them into two big classes.

Label biosensors detect the presence and/or activity of molecules of interest thanks to a special tag, the label, that is attached to them. A typical example of such a sensing technology is ELISA (enzyme-linked immunosorbent assay). On the other hand, label-free biosensors detect the presence and/or activity of molecules of interest based on their biophysical properties, such as molecular weight, refractive index (e.g. surface plasmon resonance) or charge. Other examples of label-free technologies include the microcantilever, the quartz crystal microbalance and mass spectrometry. 

What is a label? 

Think of a label as a type of tag for molecules. When you go shopping, each clothing item has a tag with a barcode; this is necessary because a store usually has several copies of the same item. Tags with barcodes allow identical items to be differentiated from each other, which lets stores keep track of the number of items they have and the location of each. Labels in biosensors have a similar function: they help us follow or detect a molecule of interest, so we can distinguish it from the others. In biosensors, the most common way to label a molecule is by attaching a color tag, most often in the form of fluorescent molecules. 

Beyond simple molecule distinction, labels also enable molecule detection in a simple, visible manner. Indeed, labels provide signal amplification. If the label is attached to an enzyme specific for a molecule of interest, this enzyme can create multiple copies of the color tag every time it binds its specific target molecule. In short, this means one specific binding event will lead to multiple copies of the color tag molecule, which makes it much easier to detect the analyte of interest. 
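The amplification can be sketched with back-of-the-envelope numbers; the turnover rate and incubation time below are invented for illustration:

```python
def amplified_signal(binding_events, turnover_per_s=50, duration_s=60):
    """Sketch of enzymatic signal amplification (illustrative rates):
    each binding event carries one enzyme label, and each enzyme keeps
    converting substrate into coloured product for the whole incubation."""
    return binding_events * turnover_per_s * duration_s

# A single binding event already yields thousands of colour molecules.
print(amplified_signal(1))     # → 3000
print(amplified_signal(10))    # → 30000
```

This multiplicative factor is why labeled assays can make even rare binding events visible to the naked eye.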


Which is better?

Using a label is a seemingly smart and practical way to keep track of and detect the molecules we are interested in; however, in biosensing, this isn’t necessarily the case. Indeed, adding labels can cause problems in accurate and precise detection. Firstly, the labels we use are molecules themselves, and these tend to be quite chunky molecules. This means that by attaching them to the molecules we are interested in, we create a much larger and different molecule: this alters the intrinsic properties of the molecule of interest, thereby affecting its transport, activity, and sometimes also its effect. In short, by adding a label, we may not be measuring what we want to measure… Furthermore, adding labels always involves an additional and non-straightforward step in the preparation/fabrication of the biosensor and requires isolation and purification of the molecules of interest. 

This is why label-free technologies are often the preferred strategy for biosensing when one is interested in the thermodynamic properties of an interaction rather than in the identity or concentration of the molecule. 


What concrete examples of label and label-free biosensors are there? How do these biosensors work, and what are their functions? Stay tuned and read more about these questions in the next blog article.