On Women in Science

Forty years ago, women were grossly underrepresented in the sciences, be it medicine or research. Since then, dramatic gains have been made – roughly 50% of MD degrees and PhDs in the life sciences are now awarded to women. And yet, women still lag behind men in full professorships and tenure-track positions, especially in math-intensive fields. A simple explanation for this disparity could be discrimination, as has been asserted by various studies in recent times.

In a study published in PNAS, authors Ceci and Williams looked for evidence of such discrimination in the areas of publishing, grant review and hiring. They thoroughly analyzed and discussed twenty years' worth of data and found a lack of evidence for overt discrimination in these areas. When it came to publishing, they looked at manuscript acceptance rates and found that women are as likely to publish as men, provided the comparison is made between men and women with similar resources and characteristics. The grant review process appears to be gender-blind. Discrimination at the hiring level has also decreased since 2000.

This indicates that reasons other than overt discrimination could be contributing to the problem. For instance, women are more likely to occupy positions with limited access to resources – be it teaching-intensive or part-time positions – and hence are less likely to publish. Or women are disproportionately affected by fertility decisions. Indeed, the authors point to three major contributors to this underrepresentation of women in math-intensive fields: family choices, career preferences and ability differences, and they suggest that different questions be asked to understand the reason behind the leak in the academic pipeline.

Findings from a postdoctoral survey conducted by the Second Task Force on the Status of NIH Intramural Women Scientists highlight these very points. Around 1,300 intramural postdoctoral fellows at the NIH (43% women and 57% men) took part in the survey. More than two-thirds of the men, but only half of the women, considered a PI position as a career choice. Women rated children, spending time with children, or time with other family members as “extremely important” or “very important” to their decision-making, while men considered these factors less important. 57% of married female postdocs who had no kids and more than 36% of single women (compared to 29% and 21% of men, respectively) rated children as a “very important” or “extremely important” factor influencing their career choices. When considering the travel and demanding schedule of a PI position, married women were more likely to be concerned than single women, whereas marital status had no influence on this factor for men. A breakdown by partner status indicated that women were more likely to have spouses who worked 40 hours or more per week and less likely to have a partner who does not work outside the home. This asymmetry is reflected in childcare arrangements – 43% of men reported that a spouse or relative cared for the kids during the day, in contrast to 16% of the women. In dual-career couples, women were more likely than men to make changes to accommodate their spouse’s job (34% vs. 21%), and furthermore, men were more likely to expect this concession from their spouse.

These data clearly reflect that women’s choices are hampered by biological and societal constraints. Simple measures – providing on-site child care, creating funding opportunities for research re-entry positions after a partial or full absence for family reasons, instituting family-friendly policies that allow for a lull in productivity around childbearing, etc. – would go a long way toward making academia a viable and satisfying career for women.


Link Love

Interesting reading explaining a new paper that reports lateral gene transfer of human DNA into the Neisseria genome.

Ed Yong: Gonorrhea has picked up Human DNA (and that’s just the beginning)

Mark Pallen: Human DNA in genomes? Yes? No? Maybe?

LA Times: Scientists find Human DNA in gonorrhea bacteria.

There is a good chance that this could be contamination, as suggested by Neill et al. Hannah Waters explains the finding.


Thoughts on AGBT 2011 (Part 2)

[Part 2 of my summary of the AGBT 2011 meeting. Mainly about the various sequencing technologies.

Also featuring: I get to anoint the most partying corporation.

Part 1 here]

Technology Development:

In contrast to last year’s two exciting new sequencing technology announcements – PacBio’s SMRT single-molecule sequencing and Ion Torrent’s Personal Genome Machine (PGM) – nothing groundbreaking was presented this year.

The closest thing to a potentially novel and readily usable technology was the BioNanomatrix system. The platform consists of a dense array of nanochannels (~100 nm wide), through which very long stretches of genomic DNA can be entropically driven in a linear manner. The flowing DNA can be either imaged with a camera (if fluorescently labeled) or electrically detected. This movie demonstrates the concept for a 400 kb DNA molecule. I am not sure how the platform can be used for de novo sequencing, but it could certainly be useful for investigating DNA structural variations, or particularly for epigenetic mapping of long DNA stretches. One thing we did wonder about was how much this technology differs from the US Genomics approach of imaging DNA in microfluidic channels.

There were also lots of posters, and quite a few talks, on sample preparation for various NGS platforms, addressing both speed and quality. This is one field I am not very familiar with, and hence I have little to write about. In fact, I had never realized how important a role sample preparation plays until I heard the genome assembly talks!

I did finally have the opportunity to see in person both the Ion Torrent PGM and the recently announced Illumina MiSeq machines. As I am not an end-user, it was the technology in each that was the cool factor for me. Both machines are pretty much plug-and-play units: add your DNA sample at one spot and out comes the result a few hours later (in real time on your iPhone app, if you want). As an added bonus, you can even charge your iPhone on a dock on the Ion Torrent PGM, making it the most expensive (~$60K) charger on the market.

In the case of the PGM, sample preparation is an 8-hour process involving emulsion PCR; the MiSeq, on the other hand, uses a 20-minute step before sample loading. The MiSeq also has the advantage of being similar to Illumina’s HiSeq – and hence can take advantage of similar chemistry and, thus, reagents. While the MiSeq is essentially a scaled-down version of the Solexa technology, the Ion Torrent uses a novel approach in which the pH change caused by the protons released during nucleotide addition is detected. Consequently, the PGM is camera-free, and its chip, where the sequencing reaction occurs, takes advantage of the semiconductor manufacturing industry – hence its throughput is roughly expected to double every six months.
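
To make the flow-based idea concrete, here is a minimal toy sketch of how per-flow incorporation signals might be turned into base calls. The flow order, signal values and rounding rule are all my own invented illustration, not Ion Torrent's actual base-calling pipeline.

```python
# Toy sketch of flow-based base calling (all values invented; not Ion Torrent's pipeline).
# Nucleotides are flowed over the chip in a fixed order; each flow yields a signal
# roughly proportional to the number of identical bases incorporated in that flow.

FLOW_ORDER = "TACG"

def call_bases(flow_signals, flow_order=FLOW_ORDER):
    """Convert per-flow signal magnitudes into a base sequence.
    ~0 means no incorporation, ~1 one base, ~2 a two-base homopolymer, and so on."""
    sequence = []
    for i, signal in enumerate(flow_signals):
        base = flow_order[i % len(flow_order)]
        homopolymer_length = int(round(signal))
        sequence.append(base * homopolymer_length)
    return "".join(sequence)

# Example: a read of "TAACG" produced over two cycles of the flow order
print(call_bases([1.1, 2.0, 0.1, 0.0, 0.05, 0.0, 0.9, 1.0]))  # -> "TAACG"
```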

Of course, there are important differences between the two in terms of read lengths and output. The PGM produces ~50-100 bp reads (>99.5% accuracy for 50 bp reads) with about 10 Mb of output per run, versus 35 bp reads and >120 Mb per run for the MiSeq (the Ion Torrent figures came from a company representative, the MiSeq figures from Illumina’s website).

While the PGM is now available for sale, it seemed to me – and feel free to correct me here – that people are still not sure where and how to utilize the platform. Big sequencing centers like the Broad Institute (Chad Nusbaum from the institute presented posters and talks on validating both the PGM and PacBio) seem to be planning to use the instrument for QC of library preparations and validation of certain sequencing results. In terms of actual sequencing, the most likely applications could be microbial genomes or targeted genes. I am assuming that various academic PIs are writing R01 grants that include $60,000 for instrument purchase as we speak. Ion Torrent is also crowd-sourcing the development of methods that will enable longer read lengths, shorter run times and even faster sample preparation.

The other major player in current third-generation sequencing systems is Pacific Biosciences’ SMRT (single-molecule, real-time) sequencer. Though their machines are not yet available commercially, quite a bit of data was presented by early users and from their own research. I was not able to attend their lunch workshop but did receive some feedback from other attendees. The main issue with the platform currently seems to be the low accuracy (86%, as opposed to >99% on all other current instruments). Apparently their own people admitted as much, indicating that the PacBio data needs to be combined with data from second-generation sequencing systems (454, SOLiD, etc.) for higher accuracy. However, they do possess some advantages – they can cover long stretches of DNA (>1,000 bp) and single-molecule sequencing is extremely rapid. The quick sequencing of the Haiti cholera genome was provided as an example [4], and I have already written about the disease weathermap project.

Additionally, Jonas Korlach from PacBio presented data on the detection of methylated bases and other DNA damage. It seems that the polymerase used in sequencing pauses when it reaches a modified base, and this longer base-addition interval can be detected. By using further base and polymerase modifications, they have been able to lengthen this interval, thereby allowing more accurate detection of these bases. This could prove very useful for epigenetic or DNA-damage studies on unamplified DNA.
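
A minimal sketch of the kinetics idea – flag positions where the base-addition interval is much longer than typical. The fold-change threshold and all the numbers are my own invention, not PacBio's published method.

```python
# Toy sketch of kinetics-based modification detection: flag template positions where
# the base-addition interval (interpulse duration) is much longer than the local median.
# The threshold and numbers are invented; this is not PacBio's actual algorithm.

def flag_modified_positions(interpulse_durations_ms, fold_threshold=3.0):
    """Return indices whose interpulse duration exceeds fold_threshold x the median."""
    sorted_d = sorted(interpulse_durations_ms)
    median = sorted_d[len(sorted_d) // 2]
    return [i for i, d in enumerate(interpulse_durations_ms) if d > fold_threshold * median]

# Durations (ms) along a read; position 3 shows a long pause suggestive of a modified base:
print(flag_modified_positions([0.9, 1.1, 1.0, 4.2, 0.8, 1.2]))  # -> [3]
```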

The most exciting new sequencing method on the horizon – and one that has the potential to be a disruptive technology – is nanopore sequencing (disclosure: I work for a company performing research in this area). The basic idea is to force DNA through a small pore (a few nanometers in scale) using a voltage bias (much like electrophoresis) and then read the differential current blockades produced by each base as it passes through the pore. The advantages of such systems include potentially long reads, no requirement for amplification (and therefore usefulness for detecting epigenetic modifications), and simple electronic detection that requires no cameras. The lack of labeling and the electronic readout make the system potentially cheap as well.
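
As a cartoon version of what "reading the current blockades" means, here is a toy sketch that simply assigns each measured blockade to the nearest of four reference levels. The picoamp values are invented, and in real pores several bases sit in the constriction at once, which is a big part of what makes the problem hard – so treat this strictly as an illustration of the principle.

```python
# Toy illustration of nanopore base calling: assign each measured current blockade
# to the nearest of four reference levels. The pA values are invented for the sketch
# and do not correspond to any real pore or published calibration.

REFERENCE_LEVELS = {"A": 52.0, "C": 45.0, "G": 60.0, "T": 38.0}  # hypothetical picoamps

def call_base(blockade_pA):
    """Return the base whose reference current level is closest to the measurement."""
    return min(REFERENCE_LEVELS, key=lambda b: abs(REFERENCE_LEVELS[b] - blockade_pA))

def call_read(blockade_trace):
    return "".join(call_base(level) for level in blockade_trace)

print(call_read([51.5, 38.9, 59.2, 44.1]))  # -> "ATGC"
```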

Unfortunately, Hagan Bayley of Oxford University, one of the leading researchers on nanopores, whose technology is being developed through Oxford Nanopore, was unable to attend the meeting. This resulted in some shuffling of talks, with his collaborator Mark Akeson – who was supposed to speak earlier, during a technology session – taking Bayley’s place in the last session, and Jens Gundlach of the University of Washington stepping in to provide an additional presentation. Both Gundlach’s and Bayley’s (and Akeson’s) groups use protein nanopores for sequencing, but they use different proteins, each with certain advantages. Since Gundlach did not approve of blogging/tweeting of his talk (“I will be the only one tweeting here” was his comment), I cannot write about it. However, his group did publish a nice paper last year describing some success with his sequencing method, albeit using a preliminary amplification step [1].

All of Akeson’s work presented at the meeting was published last year as well [2, 3]. He has been successful in tackling one of the major issues of nanopore sequencing, i.e., slowing the rapid rate at which DNA passes through the pore, thereby allowing resolution of each base. His group achieved this by ratcheting the DNA through the pore using a special polymerase. Akeson also mentioned – in response to a question from the session chair Eric Green – that short sequencing with this method is only ‘months’ away!

As opposed to the protein channels used by these groups, NABSys utilizes solid-state nanopores. The major disadvantage of solid-state nanopores versus protein pores is that their sizes – and therefore the current blockade when DNA flows through – are variable (additionally, protein nanopores can be mutated and/or chemically modified to provide greater versatility). On the other hand, since solid-state pores are not biological entities, they tend to be more stable and are easier to transport and incorporate into a machine. Also, Jon Oliver, presenting for the company, stated that they were having some luck in producing more uniformly sized solid-state nanopores, or at any rate, differently sized pores that could be calibrated accordingly.

In addition to using solid-state pores, NABSys’ approach differs in that, instead of performing direct strand sequencing (as Oxford and its collaborators are attempting), they plan to detect the distances between hybridized regions on very long stretches of DNA. The data were sketchy, but the company is doing a lot of algorithm development to allow contigs of widely varying sizes to be read and assembled with this technique.
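
My rough understanding of the approach, expressed as a toy sketch: probes bound at sequence-specific sites produce current dips as the long strand translocates, and the time between dips (at an assumed constant speed) gives a distance "fingerprint" for that strand. The speed, timings and units are invented for illustration; NABSys did not present their actual algorithm.

```python
# Toy illustration of positional hybridization mapping: a long DNA strand carries
# short probes bound at sequence-specific sites; as it translocates through a pore,
# each probe produces a current dip, and the time between dips gives the distance
# between probe sites. The speed and timings below are invented for the sketch.

TRANSLOCATION_SPEED_BASES_PER_MS = 1000.0  # hypothetical, assumed constant

def probe_distance_map(dip_times_ms):
    """Convert the times (ms) of successive probe-induced current dips into
    inter-probe distances in bases: the 'fingerprint' of that strand."""
    positions = [t * TRANSLOCATION_SPEED_BASES_PER_MS for t in dip_times_ms]
    return [round(b - a) for a, b in zip(positions, positions[1:])]

# Dips observed at 1.2, 3.7, 4.5 and 9.0 ms after the strand enters the pore:
print(probe_distance_map([1.2, 3.7, 4.5, 9.0]))  # -> [2500, 800, 4500]
```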

I was also hoping to see the new GridION platform from Oxford Nanopore in action. This is a generalized nodal platform for DNA sequencing or nanopore detection, announced just before the conference. Videos of the system look pretty cool, but unfortunately there weren’t any demonstrations at AGBT (there are some reviews out there by Luke Jostins, Dan MacArthur and Kevin Davies).

(Though not directly AGBT-related, it is worth reading in this context NHGRI’s very good summary of various nanopore sequencing approaches, including some I have described here.)

A different technology that seems much more ready for prime time is Life Technologies’ quantum-dot-based platform, codenamed ‘Starlight’. Life Tech’s Joe Beechem only had a poster about the method, but I have heard him talk on this topic before at a University of California, San Diego seminar. Personally, I find this the most exciting new sequencing technology out there (other than nanopores, of course!). This is probably because it intersects my previous interest in fluorescence detection at the single-molecule level with my current sequencing research. Like PacBio’s SMRT, this system also performs real-time single-molecule sequencing during polymerase synthesis, but it works on a different principle.

The heart of the method is a quantum-dot (QD)-conjugated polymerase, which they call a ‘nanosequencing engine’. Quantum dots are bright, highly stable fluorescent semiconductor particles on the nanometer scale with all kinds of desirable optical properties (photostability, broad excitation range, etc.). The DNA strand with the QD-polymerase is attached to a glass coverslip through a universal adapter and imaged in a solution containing the four nucleotides, each carrying a fluorescent group that emits at a unique wavelength. The sequencing is by synthesis: as a nucleotide is added, the QD – excited by laser light – transfers its excited-state energy to the fluorophore on the nucleotide by a process called Förster Resonance Energy Transfer (FRET). Consequently, the fluorescence from the QD goes down (it is said to be quenched) and, at the same time, the fluorescence of the acceptor jumps up from its zero baseline (these emissions are well enough separated on the spectrum to be detected individually).

Because FRET happens only at very short (2-10 nm) distances, this decrease in QD fluorescence and increase in nucleotide fluorescence occur only when the two are close together; background interference from other fluorescent nucleotides is therefore absent. Since each nucleotide carries a different-colored fluorophore, the bases can be called in real time based on two simultaneous observations: quenching of the QD emission and the appearance of an acceptor emission corresponding to the particular base. After incorporation, the fluorescent moiety is cleaved off by the polymerase and the QD is un-quenched, ready for the next nucleotide. A major advantage of this system over the PacBio technology is that if the QD-polymerase conjugates fail after a while, they can be washed away and replaced with fresh ones. Additionally, the growing DNA strand can be denatured and re-sequenced, thereby reducing errors.
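
The two-observation logic lends itself to a simple sketch: call a base only when the donor (QD) channel drops and exactly one acceptor color rises. The thresholds and channel values below are made up; this is not Life Technologies' actual signal processing.

```python
# Toy illustration of the two-signal FRET base-calling logic described above:
# call a base only when the donor (QD) channel is quenched AND exactly one acceptor
# channel rises above baseline. All thresholds and values are invented.

QUENCH_THRESHOLD = 0.6     # donor intensity below this fraction of baseline = quenched
ACCEPTOR_THRESHOLD = 0.3   # acceptor intensity above this = real signal, not noise

def call_base(donor_intensity, acceptor_intensities):
    """acceptor_intensities: dict like {"A": 0.05, "C": 0.9, "G": 0.0, "T": 0.1},
    each normalized to its own maximum. Returns a base or None (no confident call)."""
    if donor_intensity >= QUENCH_THRESHOLD:
        return None                               # donor not quenched: no incorporation
    lit = [b for b, v in acceptor_intensities.items() if v > ACCEPTOR_THRESHOLD]
    return lit[0] if len(lit) == 1 else None      # require exactly one acceptor color

print(call_base(0.2, {"A": 0.05, "C": 0.9, "G": 0.0, "T": 0.1}))  # -> "C"
print(call_base(0.9, {"A": 0.05, "C": 0.9, "G": 0.0, "T": 0.1}))  # -> None
```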

The neatest application of this technology is that a very long DNA molecule (even in the range of 40-50 kb) can be laid horizontally on the coverslip, with several QD nanosequencing engines performing parallel sequencing reactions along the strand!

There was no word on when Life Technologies plans to launch an actual machine based on this concept, or what the price point, accuracy, etc. will be. As I have written before, Life Technologies’ acquisition of Ion Torrent and its current ownership of the SOLiD technology make it an interesting question where this quantum-dot-based technology will fit in.

In summary, while no new sequencing technologies were released this year, there are a few exciting ones in the pipeline that could see the light of day in the next few years. I should also point out that I have not covered sequencing service providers like Complete Genomics and, more recently, PerkinElmer. Complete did have the one major announcement of this conference: the release of 60 fully sequenced human genomes for open use by researchers.

Other General Meeting Notes:

AGBT does not have a traditional vendor show. Instead, sponsors are allotted suites or small conference rooms in the hotel where they can display their products and services. This also provides them an opportunity to give away free stuff and host parties. I was fairly restrained (or so I thought), but I almost needed an extra bag on the trip home.

Speaking of parties, Thursday evening was designated on the printed meeting agenda as ‘Dinner on your Own’. This seems to be a euphemism for ‘finding the right party’. I found myself having dinner at the PacBio ‘Dinner and Movie’ event (I will write later about the documentary they showcased at the dinner) and then at various parties hosted by Life Technologies, Agilent and Caliper. Life Technologies possibly had the best location – the hotel rooftop with views out to the ocean – and they served shrimp cocktail as hors d’oeuvres to boot. But their insistence on serving something called the “Torrentini – Passion for Life” (vodka + passion fruit pucker, get it?) was slightly off-putting to a martini purist like myself.

I am not sure if Caliper’s ‘All Night Long’ party lived up to its name, but the aged Glenmorangie and the better-than-average beer selection were certainly welcome. However, when it comes to hospitality, one has to hand it to Agilent. They were possibly the most active in giving away swag, and pretty much insisted that we not pass by their suite without a drink every evening of the conference.

On this note, thanks to all the sponsors for the meals and drinks and, most importantly, for enabling all the great science to be presented. Huge appreciation to the organizing committee, too, for pulling off the logistics of hosting 800-odd people at a single site; it was quite neat how they used big screens to project talks throughout the humongous conference room so that everyone could attend the day sessions simultaneously.

———————————————————

References:

[1] I. M. Derrington, T. Z. Butler, M. D. Collins, E. Manrao, M. Pavlenok, M. Niederweis, and J. H. Gundlach, “Nanopore DNA sequencing with MspA,” Proc Natl Acad Sci U S A, vol. 107, pp. 16060-5, Sep 14, 2010.

[2] K. R. Lieberman, G. M. Cherf, M. J. Doody, F. Olasagasti, Y. Kolodji, and M. Akeson, “Processive replication of single DNA molecules in a nanopore catalyzed by phi29 DNA polymerase,” J Am Chem Soc, vol. 132, pp. 17961-72, Dec 22, 2010.

[3] F. Olasagasti, K. R. Lieberman, S. Benner, G. M. Cherf, J. M. Dahl, D. W. Deamer, and M. Akeson, “Replication of individual DNA molecules under electronic control using a protein nanopore,” Nat Nanotechnol, vol. 5, pp. 798-806, Nov 2010.

[4] C. S. Chin, J. Sorenson, J. B. Harris, W. P. Robins, R. C. Charles, R. R. Jean-Charles, J. Bullard, D. R. Webster, A. Kasarskis, P. Peluso, E. E. Paxinos, Y. Yamaichi, S. B. Calderwood, J. J. Mekalanos, E. E. Schadt, and M. K. Waldor, “The origin of the Haitian cholera outbreak strain,” N Engl J Med, vol. 364, pp. 33-42, Jan 6, 2011.


Thoughts on AGBT 2011 (Part 1)

[This is a somewhat long summary of my thoughts from the recently concluded 12th annual Advances in Genome Biology and Technology (AGBT) meeting at the Marco Island Resort. Do note that while I attended the meeting representing the company I work for, all the opinions expressed here are solely my own and do not represent those of my employer or any of our funding agencies. Apologies in advance to those in genomics for the occasional naivete about the field.]

If not the biggest, AGBT certainly seems to be the most popular gathering for genomics research. The fact that it sold out within 36 hours of registration opening bears testimony to this. The beachfront location, with its promise of sunny southwest Florida weather, does help (especially this year, when half the country was under a gazillion inches of snow), as do the wonderful wining and dining opportunities. But in recent years AGBT has also gained a reputation as the site of major announcements by genomics companies; both Pacific Biosciences (PacBio) and Ion Torrent released their potentially paradigm-shifting sequencers during the 2010 meeting.

As someone who has only recently entered the field of genomics – and from the technology side of DNA sequencing at that – my goal as a first-time attendee was simply to get a good perspective on the field. This was easily achieved, thanks mainly to the wonderful diversity of the scheduled scientific talks as well as of the attendees. In addition to academic and corporate scientists, the meeting was well attended by various non-scientist professionals, including people from investment and venture capital firms, science writers, computer network specialists, etc. The structure of the conference allowed me to converse at length with this variety of people during meals or over drinks at the social mixers, resulting in a very gratifying and enriching experience.

The only fly in the ointment was perhaps the slightly unfair blogging/tweeting policy. Though the organizers should be commended for having a clear-cut policy on sharing information presented at the meeting over the internet, they decided it would be opt-in rather than opt-out. So even if a speaker was okay with their lecture being tweeted, unless this was announced explicitly one had to maintain Twitter and blog silence. Adding to this, it appeared that many speakers did not fully understand Twitter and blogging and ended up refusing permission even when they presented fully published data! Interestingly, more and more speakers were okay with tweeting/blogging as the conference progressed, indicating that they were possibly better informed by then.

As such, in this post I will avoid giving a synopsis of every talk I attended and provide a general summary instead.

(If you are interested in more details, Anthony Fejes has done an extraordinary job of compiling on-the-spot notes on the talks that did allow dissemination. Additionally, the #AGBT hashtag on Twitter has many real-time observations on the scientific sessions.)

Overall impressions:

While the meeting was about ‘advanced genomics’, the field is currently dominated by next-generation sequencing (NGS) technologies – i.e., post-Sanger sequencing methods – and their applications. If you’ve been keeping up with the scientific literature, you will know that the cost of NGS has been falling (and its speed rising) remarkably fast, in fact at a rate faster than Moore’s Law. This ever-decreasing cost, and the emerging availability of both turn-key machines and centralized rapid-sequencing providers (e.g., Complete Genomics and, recently, PerkinElmer), has made NGS readily available for various scientific and clinical uses. A consequence of this so-called ‘democratization of sequencing’ is that a plethora of applications is developing around it, from cancer genomics to microbiome sequencing to pathogen detection.

I will get back to some of these applications shortly. For the moment, taking a broader perspective, the consensus seems to be that the future challenges (and potential bottlenecks) in genomics will be twofold: data, and the meaning of data.

The problem with data is that there is a huge amount of it. We are talking terabytes and petabytes (e.g., NHGRI is currently storing about 400 TB of data). The phrase ‘data deluge’ was thrown around more than once over the course of three days. NGS machines produce enormous data streams (an Illumina HiSeq could generate terabases per week), and both storing and transferring them is an issue. Large genome centers seem to be doing well here, by virtue of possessing enough square footage and in-house capabilities for software development and analysis. For smaller centers and individual labs, cloud or other solutions are required. Data transfer companies previously involved in other fields that require similarly large transfers (e.g., media, intelligence) are moving to offer solutions in this area.
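
A quick back-of-envelope, with assumed numbers (the output rate and bytes per base are illustrative guesses of mine, not vendor specifications), shows why this escalates so quickly:

```python
# Back-of-envelope sketch of the 'data deluge': rough, assumed numbers only
# (per-base storage cost and output rate are illustrative, not vendor specifications).

TERABASES_PER_WEEK = 1.0          # assumed output of one high-end instrument
BYTES_PER_BASE = 2.0              # assumed: base call plus a compressed quality score

bases_per_year = TERABASES_PER_WEEK * 1e12 * 52
raw_terabytes_per_year = bases_per_year * BYTES_PER_BASE / 1e12

print(f"~{raw_terabytes_per_year:.0f} TB/year of raw reads from a single machine")
# -> ~104 TB/year, i.e. a handful of instruments quickly reaches the petabyte scale
```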

Additionally, given the short reads produced by many of the instruments, assembling genomic data – especially for de novo assembly – is another important issue requiring a bioinformatics solution.
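
For readers outside the field, here is a toy sketch of the underlying problem – greedily merging reads by overlap. Real assemblers use overlap or de Bruijn graphs and must cope with errors and repeats; the reads below are invented for illustration.

```python
# Toy sketch of why short reads make de novo assembly hard: a minimal greedy
# overlap-merge of reads. Real assemblers are far more sophisticated; this
# only illustrates the principle on invented, error-free reads.

def merge(a, b, min_overlap=3):
    """Merge b onto the end of a if a suffix of a matches a prefix of b."""
    for k in range(min(len(a), len(b)), min_overlap - 1, -1):
        if a.endswith(b[:k]):
            return a + b[k:]
    return None

reads = ["ATGGCCT", "GCCTTAC", "TTACGGA"]
contig = reads[0]
for read in reads[1:]:
    merged = merge(contig, read)
    contig = merged if merged else contig
print(contig)  # -> "ATGGCCTTACGGA"
```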

During one of their presentations, PacBio stated that the data deluge problem wouldn’t be solved by ‘anyone in the room’, but by people who have worked for Amazon, eBay or Facebook. I do not know if any biostatistician or network professional (there were a few of them at the meeting) felt hurt by this, but PacBio did introduce someone (I’ve forgotten the name) they’ve recruited from Facebook to harness the data. The company’s underlying philosophy is to find a solution that flows as: data → information → knowledge → wisdom. Neat mantra, but they did not divulge exact details of how they were going to do it.

The second challenge – inherent in the PacBio philosophy described above – is how we interpret this data/information. This problem sits at the biology level and basically asks, ‘how do genes relate to a particular phenotype?’ (the information → knowledge step). This is a complicated question, and it depends on good study design. Once again, though, biostatisticians/bioinformaticians are required to develop tools for solving these puzzles (hint for anyone currently considering a PhD in a bio-related field).

Somewhat related to the data issue was an interesting perspective from Ellen Wright Clayton of Vanderbilt. Her somewhat controversial point was that the ‘tsunami’ of genomics data related to patients could overwhelm the healthcare system. The way I understood her contention: we do not yet know enough biology to interpret genomic information, but given that the information is available, patients may demand access to it. This is particularly relevant for newborn sequencing – parents may want access to the genomic data without the capacity to understand its meaning. Her point is that this is dangerous, since genomic data does not translate directly into phenotypes – epigenetics, metagenomics, etc. play important roles (she cited the bad science of predictive genomics in ‘GATTACA’ as an example). Therefore, as an aid to public policy, scientists should try to analyze the genomic data as quickly as possible.

I certainly agree that interpretation of genomic data should be a priority, but as already mentioned, it is not a trivial task. Meanwhile, the release of genomic data to patients is a more complicated ethical and policy question, better tackled later in a separate blog post (anyone reading this is welcome to leave a comment with their thoughts on the issue).

Science and Applications:

Cancer biology is the most obvious target of NGS approaches. Several findings were presented in which novel mutations, rearrangements, etc. were discovered by high-throughput sequencing, especially using the 454 system (which has a very low error rate). Unfortunately, most of these presentations had a no-blogging policy. And it was difficult for me to follow many of these talks anyway, mainly due to the abundance of jargon and unexplained lingo.

The most exciting scientific breakthroughs presented at AGBT – at least for me – were with respect to human microbiomes. Rob Knight from the University of Colorado gave a highly entertaining talk on this topic. Recent publications have demonstrated that the nature of the bacterial flora we carry in our gut impacts aspects of our health (in particular, obesity) and our interactions with drugs. Additionally, while human beings share 99% similarity in their genomic DNA, our symbiotic bacterial genomes (the microbiome) can differ by as much as 40%!

Therefore, the Knight lab is developing experimental and bioinformatics tools that use NGS to obtain a biogeographical landscape of the microbial populations in the body (and on things we touch, like computer keyboards). Among various interesting results, it seems that different parts of our face have different microbial populations (though the distribution roughly follows facial symmetry), and these may shift over time. Another interesting tidbit was the finding that microbial populations growing in extreme environments are not necessarily genomic outliers, but some of those found in the human gut are.

Joseph Petrosino from Baylor followed Knight with a similar talk, except that he is trying to map the viral metagenome in humans. His ‘tweetable’ moment came when he talked about distinguishing the nose from the ass by the difference in viral genomes from the two orifices. (He also mentioned that Baylor is actively involved with the Houston Zoo in trying to find a cure for elephant herpes, using NGS to determine the full genome of the virus.)

Both Petrosino and Knight did agree on one drawback of their approach: their sequencing methods could overlook microbial species that have not been sequenced before or are difficult to sequence. But the conclusion from both talks seemed to be that the goals of personalized medicine and therapeutic discovery could be reached faster through the study of microbiomes than of the human genome.

Perhaps the weirdest application of microbial genomics was presented by Zhong Wang of JGI. They are attempting to discover novel enzymes that can chew up cellulose. The approach is to perform NGS on samples taken directly from the rumen of cows fed on switchgrass. To do this, they use a ‘cow with a door’, i.e., a fistulated cow into which you can directly insert your hand and pull out material from inside! They seem to have found some novel cellulases through this full-scale genomics approach, and this was validated by using the cow rumen samples in an enzymatic assay against cellulose substrates. However, I believe they are still trying to assemble and validate the full sequences of these cellulase genes.

While on the topic of microbes, pathogen surveillance by NGS methods emerged as an area of great interest. The main advantage of NGS over current pathogen detection assays is that artificial genetic modifications can be detected, and the full sequence also allows identification of the source of the material based on markers.

In this context, PacBio’s Eric Schadt presented their vision for fast sequencing using the SMRT technology to build a disease ‘weathermap’, which could help predict outbreaks. Their contention is that the rapid sequencing provided by their single-molecule technology is ideal for such purposes. The recent rapid identification of the origins of the Haitian cholera outbreak by PacBio was presented as an example (I did notice, though, that they required the Illumina sequence information collected by the CDC for complete assembly of that cholera genome! But more on this in the technology section). Preliminary data from sequencing of samples from two sewage treatment plants in the Bay Area, as well as swabs taken from various areas in the workplace, public areas and employees, were presented as pilot studies. This led to the discovery of H1N1 in nasal swabs of various employees before they had symptoms (though it wasn’t enough to prevent the flu in those cases). They also discovered virus genomes related to pigs and cows on the office refrigerator doors, obviously having originated from lunch meats! A lesson that hands should be washed carefully.

The grand plan, however, is to have such sequencing information from various public places (e.g., restaurants) and private ones (down to the level of individual houses) rapidly collected and updated on Google Maps, enabling people to make real-time decisions about avoiding certain areas! I am not really convinced of the usefulness of such high-resolution mapping of microorganisms, and the logistics of such an endeavor are many years off – especially with the PacBio machine, given its cost and lack of portability.

However, a broader surveillance map (at the city-block, or even city/county, level) might be handy for monitoring and checking disease progression.

Interest in rapid pathogen detection also comes from US government entities such as the FDA, as well as the military. Lauren McNew from the US military’s Edgewood Chemical Biological Center spoke about how they are developing threat responses based on NGS. Their recent exercises have been based on purified single-source DNA, mixed DNA and DNA spiked with bacterial organisms. They have been able to complete sample preparation, sequencing with SOLiD and 454, and a full report on the organism, its origins, etc. in 36 hours (and yes, apparently they did stay in the lab for 36 continuous hours. Beat that, you whining grad students).

In addition, there were a couple of interesting talks about the tomato genome and the genome of a pathogen infecting potatoes (the same one that caused the Irish famine), which demonstrated how NGS approaches could be used to improve food production. Unfortunately, my own notes are sketchy here and I will again refer you to Anthony’s real-time notes here and here.

One aspect of genomic data application that I did not hear much about (perhaps it was covered in sessions I could not attend) was the use of NGS in clinical settings. While it is obvious that further development of turn-key sequencing solutions is required for sequencing to enter the doctor’s office, I was interested in learning whether any current genomic data are sufficient to make diagnostic or treatment calls.

Overall, it was quite exciting to note the wide variety of current and potential applications of next-generation genomics technologies.

 


Picks of 2010: Biology

It is impossible to do top picks from science without revealing your bias, and I think one shouldn’t even try. I tend to read more biology-based journals, and molecular biology stories in particular hold my interest and linger in memory. My picks also have a decidedly technical bias. But then again, this is an area I know well, and I know the importance of the stories being linked to (in no order of importance). However, feel free to argue if you think I have left your favorite bit of biology out.

1. Craig Venter and company creating the first synthetic genome. The hype about “artificial life” was typical Venter. The technique, however, is impressive. Keep an eye on this field as it takes off in the next couple of years.

2. Genomics. How this area has exploded in the recent past. Some picks from this area that made quite a splash in 2010: the sequencing of the Neanderthal genome; the finding that genomes are riddled with ancient viruses; rare variants and the genetics of autism. Ozzy Osbourne got his genome sequenced. So can you, by volunteering here. And just as companies like 23andMe were making a splash, the FDA clamped down on personal genomics.

3. Optogenetics. This is the technique of 2010.

4. Ocean census. A ginormous effort that began a decade ago. But now we know something about what lives in the ocean.

5. Coloring the dinosaur right. New evidence from fossils reveals microscopic structures that contain color pigments. This appeals to the geeky childhood years I spent coloring dinosaurs.

6. Cellular re-programming. Scientists could reprogram adult skin cells directly into functional nerve cells. This opens up the possibility of non-embryonic regenerative therapy.

7. Three-parent embryos. Excellent news for people with mitochondrial defects, but mired in ethical debate.

8. A. sediba discovery. And another branch added to the evolutionary tree.

9. Arsenic in DNA? The hype and the criticism that followed should be an eye-opener for anyone who believes that scientists accept findings without debate (heated debate at that). If a paper is wrong, makes overreaching conclusions, or uses flawed methodology – you can bet people will be all over it. Especially if it gets publicity like this one did.

10. Imaging. Live imaging of the brain of a zebrafish embryo. Live imaging of fruit fly neuroblasts reveals clues to stem cell division. And mapping the fly brain, neuron by neuron.


The Arsenic Eating Bacteria Controversy

Events move so fast that – as the cliché goes – a week is a long time in politics. Science, on the other hand, usually moves at a comparatively glacial pace, both in execution and in dissemination. A recent exception, however, is the hype and controversy surrounding the publication of a study on bacteria that can grow in high concentrations of arsenic (As), an element hitherto unknown to support life – a case where blogs have provided an alternative, accelerated feedback mechanism.

This write-up is an attempt to briefly outline this story along with a few opinions of my own. Many excellent blogs and comments therein have helped to shape some of my thoughts on this issue.  These have been acknowledged in the post and in the ‘Further Reading’ section at the end. I hope this is also useful for those who haven’t followed the events closely (I have tried to keep the science as simple as possible).

Background:

On December 2nd, a group of astrobiology researchers led by Felisa Wolfe-Simon of NASA published a paper in the reputed journal Science, claiming to have found a ‘new form of life’ that utilized arsenic in place of phosphorus (P). Life as we know it uses only carbon, hydrogen, nitrogen, oxygen, phosphorus and sulfur – plus some trace metals – in its building blocks. Therefore, this research had the potential to be paradigm-shifting.

The controversy started with the fanfare accompanying the publication. Science, while making the report available to journalists a few days earlier, placed an embargo on its general release. Meanwhile, NASA’s public relations department got into the act, calling a press conference to announce a major discovery that was supposed to “impact the search for evidence of extraterrestrial life”. Heady claims indeed! Unfortunately, this unnecessary secrecy – combined with the publicity – resulted in high expectations and even a few wild speculations on the internet, including claims that NASA was going to announce the discovery of extraterrestrial microbial life!

Against this backdrop, when Science eventually lifted the embargo (an hour and a half early, to quell further rumors), the research paper itself was a bit of a let-down to scientists such as myself. What the researchers demonstrated was that they could gather bacteria from Mono Lake in California (which has extreme living conditions), and that one particular strain, dubbed GFAJ-1, was able to grow at high concentrations of As in the medium. However, it wasn’t as if the bacteria grew only in As; in fact, they grew much better in P. Additionally, biochemical and biophysical experiments indicated only indirectly that the bacteria were incorporating As into proteins, lipids and, most importantly, DNA.

This was a bit of – pardon the street expression – a meh. From my personal scientific point of view, the most disappointing aspects of the study were (a) the lack of proof that As actually formed part of the DNA backbone and (b) the lack of speculation by the authors about possible mechanisms by which various enzymes could utilize arsenate instead of phosphate. These factors made the paper merely interesting, rather than great or groundbreaking.

Initially, a few scientists and science journalists, mainly through blogs, called out NASA on its claims that this research was associated with the search for extraterrestrial life, or even that this bacterium was a ‘new life form’. Even while the mainstream media was running with the extraterrestrial angle, blogs by Ed Yong and Anirban painted a more sober picture.

Thereafter, events unfolded rapidly. In addition to doubts about the impact of the results, a number of scientists started voicing concerns – again through blogs – about the lack of scientific rigor and proper controls in some of the experiments performed. Most prominent among these were Prof. Rosie Redfield, a microbiologist at the University of British Columbia, and Prof. Alex Bradley, a biologist at Harvard. Additionally, noted science author Carl Zimmer got in touch with a wide range of experts to solicit their opinions and wrote two articles – one for the general audience in Slate and a follow-up at his own blog, The Loom (the latter with the detailed critiques from all the experts). Most of the expert opinions were negative, with one scientist even stating “This paper should not have been published”!

I won’t go into the technical details, but do read the blogs mentioned if you are interested in the critiques. Briefly, the main flaw of the paper seems to be the lack of actual proof of As in the DNA backbone, something that could be proved or disproved by some simple experiments. Additionally, an important step in the original experiments involved placing purified bacterial DNA in water. If the DNA really had an As-substituted backbone, by all known chemical mechanisms it should have disintegrated. It did not. Thus many experts speculate that the bacteria could be surviving in the As medium by using the tiny amounts of phosphate that contaminate As buffers – quite contrary to the authors’ claim of an As-utilizing bacterium.

Unfortunately, NASA’s response to these critiques was a bit bizarre. When asked by CBC to comment on the criticisms leveled against the paper,

[NASA spokesman] Dwayne Browne …… noted that the article was peer-reviewed and published in one of the most prestigious scientific journals. He added that Wolfe-Simon will not be responding to individual criticisms, as the agency doesn’t feel it is appropriate to debate the science using the media and bloggers. Instead, it believes that should be done in scientific publications. (link) (emphasis mine)

Reactions to this comment were predictably swift and scornful. Though there were a few voices in support, on the whole, NASA was (IMO quite correctly) castigated for dismissing blogs as a medium of scientific discussion. Particularly galling was the hypocrisy of NASA in releasing research through the press and then not wanting to be held responsible in the same public sphere!

As many people have pointed out already, NASA and the authors do not have to respond to every blog and every comment against the work. However, to disregard skepticism just because it is not ‘peer-reviewed’ is bad form, and it exposes a stunningly old-fashioned view from an institution that is supposed to be at the cutting edge.

As things stand now, Dr. Wolfe-Simon has since posted an update on her personal blog stating they are “preparing a list of “frequently asked questions” to help promote general understanding of our work”. No word yet though on how they plan to address the critics.

My own thoughts:

Impact of the work: From my own reading and the reasoned arguments of various scientists, it is very difficult to agree with the original interpretation that As is somehow being utilized by these organisms and incorporated into their biomolecules. If some of the experimental results do hold up to scrutiny and reproduction, this could still be an interesting find (e.g., it might find some application in As scavenging, with As in groundwater being such a big issue in many countries). However, the work is nothing on the scale of discovering new forms of life, much less anything to do with extraterrestrial life.

Who was wrong here: Some people have accused the authors of this paper of dishonesty or outright fraud. However, I doubt there is any evidence of that, especially given that they seem to have published most of their scientific data – even data that do not fully support their conclusions. At most, the authors are guilty of over-interpretation and of not performing proper controls. This could and does happen to any scientist, and both pre- and post-publication peer review correct such mistakes.

It is probably fair to say that the pre-publication peer review by Science should have done a better job of catching some of these glaring errors. It is also possible that reviewers did raise concerns but were overruled by an editorial decision and over-eagerness to publish a high-impact paper. But this is pure speculation.

One party that has come out looking really bad and has severely tarnished its image over the incident is NASA. They have been rightly criticized in many quarters for jumping the gun and unnecessarily hyping an incomplete piece of work, especially with the non-existent extraterrestrial angle. As Athena Andreadis points out, NASA has done a great disservice to the entire science community and possibly to the lead researcher on this paper.

By disbursing hype, NASA administrators handed ready-made ammunition to the already strong and growing anti-intellectual, anti-scientific groups in US society: to creationists and proponents of (un)intelligent design; to climate change denialists and young-earth biblical fundamentalists; to politicians who have been slashing everything “non-essential” (except, of course, war spending and capital gains income).  It jeopardized the still-struggling discipline of astrobiology.  And it jeopardized the future of a young scientist who is at least enthusiastic about her research even if her critical thinking needs a booster shot – or a more rigorous mentor.

Impact of blogs: If there is a silver lining to the whole incident, it is the lively role taken up by blogs in post-publication peer review (as well as the ensuing debate over the pros and cons of the role of blogs in scientific communication).

As I already mentioned, when the work was first announced, it was mostly scientific blogs such as Ed Yong’s that offered the most balanced perspective. Most of the mainstream media (including some prominent science-related outlets, such as NPR’s Science Friday and Neil deGrasse Tyson’s radio show) was focusing on the alien-life angle. Thereafter, all of the criticism of the paper came through blogs. In the pre-Web 2.0 days, the paper would probably have been discussed among faculty and students at various institutions – at water coolers or, more formally, at journal clubs. However, these discussions would have happened in an isolated manner, with no counterpoints offered on a wide scale. A wider forum for discussion would have been scientific conferences, if any of the authors presented the work there; even then, discussion would be limited to a few questions from the audience. Effective criticism would only have happened through letters written to Science. The peer review by various blogs essentially sped up this process of scrutiny. And this is good for science.

Personally, I have long advocated the interactive capabilities of the internet as a pseudo-conference platform. Such forums are much more inclusive and, as demonstrated here, much speedier (not to mention cheaper, with no travel or accommodation costs!). The main complaint is that they could draw in a lot of non-experts, or people with an agenda. However, one can argue that in conventional peer review it is easier for personal agendas to seep through, given that reviewers are mostly anonymous. Also, while trolling could happen (especially in the case of politically charged topics), I’d like to think (perhaps naively) that scientists are good at ignoring illogical arguments. The comments section of Prof. Redfield’s blog is a fine example, with a rich scientific discussion in progress. This includes criticisms of Redfield’s critiques too (followed by counterpoints to the critiques of the critiques – as I said, a lively discussion). Of course, safeguards have to be built in too – but a full discussion of how an interactive web for science communication would work is beyond the scope of this post.

It is worth mentioning here as well that this is not the first instance in which blogs took the lead in criticizing a high-impact publication – earlier in the year, a paper claiming novel high-throughput chemistry had the same experience. Those findings did not have as large a mainstream audience and hence were not covered as much.

Of course, such large-scale discussions are not going to happen with every paper published. This was a high-impact paper and hence got the attention. However, I think this incident has got the ball rolling and has many in the community thinking about how best to leverage web discussions for science. Prof. Redfield herself has some good suggestions on her blog.

One good first step would be encouraging more scientists to blog. Currently their number is small, but the quality of blogs written by true-blue scientists is exceedingly good. This spans a full spectrum from academic scientists such as Redfield (who also encourages her students to blog) and Stephen Curry to those in industry – Derek Lowe being a prominent example. Also heartening is the fact that many new science graduate students are taking to blogging (my own anecdotal observation, though).

Finally, for those who continue to argue and side with NASA that scientific discussions must occur within the confines of peer-review, David Dobbs has the best answer on Wired:

[W]e should remember that the printed science journal was originally created to expand the discussion of science, allowing it to move beyond face-to-face contact at salons and saloons and meetings and into a medium more widely shared. It’s silly to now cite the printed journal’s traditions as a way to limit discussion.

Scientific debate and the wider public: An argument was made during the early part of this story that scientists should present a united front when talking to the general, lay audience, so as not to provide further ammunition to the anti-science crowd – climate-change skeptics, creationists, anti-vaccination nuts and their like. However, I strongly disagree. What makes science different from religion or politics is its inherent transparency and the messy democratic process of peer review and open criticism. People who do not believe the scientific evidence for evolution, etc. are often motivated by other political or religious factors – I doubt they will ever be convinced, even if presented with clear evidence. Scientists need not forsake openness in debate for this extreme group. I completely agree with Dobbs (again!) that this sort of public messiness can only be good for science.

Because if the general public sees this sort of row in routine findings — if they understand that science routinely sparks arguments over data and method —  they’re less likely to see such messy debate as  sinister when they see it in something like climate science.

Lessons?

Hopefully this incident will be a good exercise in lessons learned for future scientists and institutions hoping to make a big impact on the public or scientific stage.

One also hopes that this spurs a paradigm-shift in terms of the role of blogs and other non-traditional media in scientific communication and dissemination of scientific ideas.

Further Reading:

Many of the opinions I wrote here have been expressed before. Some of the blogs and comments below have also helped to reinforce or better shape some of my own thoughts. I highly encourage reading these if you are interested in understanding the science in further depth.

  1. For absolute up to date coverage of this story, do follow Guardian’s excellent story tracker here.
  2. Ed Yong’s blog is a great resource – here is his early report and here is a post-mortem after a week.
  3. A good review initially by blogger Anirban (Bhalomanush).
  4. Carl Zimmer’s Slate article does a very good job of covering the controversy in simple terms, but also read detailed critiques by various scientists here.
  5. Prof. Rosie Redfield and Alex Bradley’s review of the original paper.
  6. Really good coverage – with a critique of the paper and a scathing rebuke of the manner in which NASA handled the affair – by Athena Andreadis at her site.
  7. David Dobbs, Deepak Singh respond to NASA’s refusal to consider questions from blogs.
  8. David Dobbs’s analysis of how scientific communication should evolve and how scientific debate should be open. These were essentially the points I was trying to make, but Dobbs expresses them so much better.
  9. Finally, the paper in question itself has been made available for free, if you are interested in reading.

 


SDBN Event Thoughts

As mentioned earlier, Prithwish and I attended the San Diego Biotechnology Network (SDBN) event “The Human Genome 10 Years Later: What Does it Mean for San Diego Biotech?” yesterday. The event was well attended, and the discussion ranged from issues with data storage and the hype around personalized genomes to drug discovery.

The event began with Paul Flook, Senior Director, Life Sciences R&D (one of the two moderators from Accelrys), giving a brief introduction to Accelrys. He also highlighted the significant events of the ten years since the human genome was sequenced, and seeded the questions for the panelists to consider:

  • How will the direct to consumer genomics market impact our economy?
  • How is genomics being used in drug discovery, and what therapeutic areas have the most promise for San Diego?
  • Will the continuing affordability of sequencing affect the landscape of companies?
  • What are the best adaptations we can make to take better advantage of the opportunities?

The panelists then introduced themselves and each gave a brief talk on their interest in high-throughput sequencing (HTS). The panel consisted of:

Michael Cooke, Ph.D., Director of Immunology Discovery, Genomics Institute of the Novartis Research Foundation (GNF)
Kelly Frazer, Ph.D., Chief of the Division of Genome Information Sciences, UCSD Department of Pediatrics
Tom Novak, Ph.D., SVP of Research, Fate Therapeutics
Aristides A. N. Patrinos, Ph.D., President, Synthetic Genomics
Emily Winn-Deen, Ph.D., Vice President, Diagnostics Development, Illumina

Then began the discussion. (I was not recording this event. I took some notes, but the following mostly represents my interpretation of what the panelists were saying; these are not verbatim replies. Apologies to the panelists if I misheard or misunderstood something.)

1. Dr. Winn-Deen in her introduction had said that full genome sequencing would cost less than $1,000 by 2014. This, she said, would make SNP sequencing obsolete. Prithwish asked her to expand on this further, since the limiting factor would still be functional genomics – understanding the sequencing data.

Winn-Deen: With whole genome sequencing becoming as cheap as $1000, it would make more sense to sequence the entire genome rather than SNPs. But understanding the data will be a challenge.

Cooke: One of the unresolved questions in the field is whether diseases map to several common SNPs or to rare, unique SNPs. So far, GWAS have looked at common SNPs, but we have yet to explore associations with rare SNPs. Whole genome sequencing offers the opportunity to address this question.

2. Winn-Deen in her brief introduction talked about $1,000 sequencing and how, in the future, one could envision a baby’s genome being sequenced soon after birth and stored; doctors could later just retrieve this data and figure out a treatment plan. A couple of questions – how is the data generated from HTS going to be stored? And considering that a lot of cancers arise from somatic mutations, would one-time sequencing at birth be enough, or would we have to sequence multiple times?

Frazer: One of the shortcomings has been the lack of discussion of where and how the generated data will be stored. A centralized resource makes more sense than individual labs, and the same goes for hospitals: instead of every lab or hospital producing and handling the data, a central core facility does the job. Also, right now people don't have privacy concerns about their sequencing data, but as we move away from consumer genomics we will have to address them.

Novak: Also, it is difficult for a layperson to interpret risks. Direct-to-consumer genomics is a bad idea because the data generated is complex and the risks are difficult to interpret.

Patrinos: There is considerable immaturity in science management when it comes to envisioning the problems associated with HTS and taking steps to address them; that maturation needs to be sped up. We could mimic what the physicists did with their big machines – a central system to generate the data.
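
An aside from me, not the panel: a quick back-of-envelope sketch of why storage kept coming up. The coverage depth and bytes-per-base figures below are my own round assumptions, not numbers anyone quoted at the event.

```python
# Rough back-of-envelope estimate of raw storage for one whole human genome.
# All figures are illustrative assumptions, not numbers quoted by the panel.

GENOME_SIZE = 3.1e9   # haploid human genome, ~3.1 billion bases
COVERAGE = 30         # assume ~30x average sequencing depth
BYTES_PER_BASE = 2    # crude FASTQ footprint: one base call + one quality score

raw_bytes = GENOME_SIZE * COVERAGE * BYTES_PER_BASE
print(f"~{raw_bytes / 1e9:.0f} GB of raw reads per genome")                 # ~186 GB

# Scale up to a hypothetical hospital sequencing 10,000 newborns a year.
print(f"~{raw_bytes * 10_000 / 1e15:.1f} PB per year for 10,000 genomes")   # ~1.9 PB
```

Even with these crude numbers, it is easy to see why a central core facility, rather than every lab's hard drives, looks attractive.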

3. Considering that not all the panel members were trained in genomics, what interdisciplinary areas will help genomics develop?

Winn-Deen: There is a need for a sub-speciality in medicine – someone who is trained to interpret the HTS/genomics data that will be generated.

Novak: Basic biology training has not changed in years, and biologists (I think he meant molecular rather than evo/eco people) are not strong in mathematics or statistics. In the early days of genomics, the biologists who left bench work did the bioinformatics, but they have now been superseded by people who can handle large data sets. There is a need to train biologists to do this.

Frazer: The UCSD medical school will begin a 10-day lab-and-lecture course on genomics in the first quarter of this year, so this is the beginning of medical genomics. How is biology training changing? The three graduate students in my lab are from computational backgrounds; it is computer science that is changing more than biology. Biology is a strange field – unlike math, where you peak at 25, in biology you are still learning at 40. Hence computer science kids of 24 can still be trained in biology.

Cooke: One of the important things to learn is how to communicate ideas in an interdisciplinary environment. Being able to share ideas with another person so that they can grasp your problem and offer solutions will be important in the future.

Frazer: The bigger issue in genomics is not computational. Right now, we have very little idea of the genetic architecture needed to generate testable hypotheses. For example, inheritance exists but we can't test it.

(General comment by an audience member: the key point is that we can't test inheritance in humans directly and have to test in mice or flies and extrapolate to humans.)

4. Moderator Flook asked whether we are underestimating the accuracy problem with current HTS, or whether it is good enough to deliver.

Frazer: Not enthusiastic about it. When you sequence a quartet (mother, father and two kids), there are so many Mendelian errors that the data can't all be true. Going from sequencing a premature baby's genome to treating it in hospital within, say, five years is not possible.

Winn-Deen: It's all statistics. At 99.9% accuracy across billions of nucleotides, there will still be mistakes. Most mistakes are generated upstream of sequencing, while preparing the sample. Getting 100% accurate reads will require good chemistry and good sequencing.

Frazer: There are about 70,000 mistakes per person in a quartet sequencing. The sequencing error rate has to come down. At this point one cannot go to the clinic with the data; it has to get better before we get there.
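
To put those numbers in rough perspective, here is a small arithmetic sketch. The genome size and accuracy figures are round assumptions of mine, not values the panelists gave.

```python
# Toy arithmetic: how many miscalled positions does "99.9% accurate" imply?
# Genome size and accuracy values are round assumptions for illustration only.

genome_size = 3.0e9          # ~3 billion positions in the human genome
consensus_accuracy = 0.999   # 99.9% accuracy of the final, assembled calls

expected_errors = genome_size * (1 - consensus_accuracy)
print(f"Miscalls at 99.9% accuracy: ~{expected_errors:,.0f}")    # ~3,000,000

# Even at 99.998% per-position accuracy there would still be tens of thousands
# of errors – the same ballpark as the ~70,000 per-person Mendelian
# inconsistencies mentioned above.
print(f"Miscalls at 99.998% accuracy: ~{genome_size * (1 - 0.99998):,.0f}")  # ~60,000
```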

5. If sequencing were (hypothetically) 100% accurate, would it be good for diagnostics?

Frazer: Sequencing, and where it is headed, is very important for diagnostics. But GWAS was similarly hyped up ten years ago. We need a realistic view of genomics.

6. Mary: There were a lot of start-ups after the genome was sequenced, but the economics of sequencing affected them. Now Life Technologies and Illumina are closer to consumer genomics. Is there room for start-ups, or is genomics only for large companies?

Winn-Deen: There is large potential for start-ups – medical groups that can sequence and interpret data. Also, San Diego has a vibrant biotech community and a good university; all the pieces for genomics are here.

Frazer: There are areas for improvement. For example, the acquisition of Ion Torrent is about more than knocking out the competition; it is about rapid turnaround from getting DNA to sequencing to analyzing data. Right now it takes three days to generate a library, two days to run and three days to align – not fast enough in a clinical setting. Ion Torrent claims to sequence 1-2 kb strands. This shows that the technology is going to get better and faster.

Novak: The genome is not set up for drug discovery, but it will play a role in improving clinical trials.

Patrinos: There are going to be advances in non-medical fields before medical genomics takes off.

7. Regarding the central sequencing center: is the assumed $100 cost based on the high initial investment or on data storage?

Frazer: I just put the thought of a central center out there; I am not really sure. To sequence tumor and germline samples you would need a large number of machines, which may cost $500,000 each and become obsolete in a year, so it would not be worth everybody investing in them. Data storage is also an issue, as is data handling.

8. Is there any talk of synthesizing the human genome?

Patrinos: It is not even being considered. Synthetic Genomics has taken important but limited steps in synthesizing the first microbial genome; the technology is limited so far. The synthetic biology field is not well funded – it is promising but still nascent.

A stimulating discussion, as you can see. Many thanks to SDBN (and particularly to Mary Canady) for hosting it.

Posted in General, News | Tagged , , | 2 Comments

Genome sequencing and San Diego – discussions at a local event tonight

Both Sakshi and I will be at the San Diego Biotechnology Network (SDBN) event tonight: “The Human Genome 10 Years Later: What Does it Mean for San Diego Biotech?”

June 2010 marks the 10th anniversary of the sequencing of the human genome. As Francis Collins has pointed out, “we invariably overestimate the short-term impacts of new technologies and underestimate their longer-term effects.” As a member of the community for 14 years, I can tell you that this is true for San Diego, as the expectations in 2000 were likely too high, but the last ten years have brought unexpected progress in many areas.

San Diego has a great ecosystem to take advantage of these exciting longer-term benefits, ranging from our expertise in creating cutting-edge research tools, to groundbreaking drug discoveries, to new classes of diagnostics, to the exciting new field of synthetic genomics. Of course, Craig Venter has given us a big vote of confidence by relocating here. Let's bring together experts in an interactive environment August 18th to discuss how we can take advantage of these exciting opportunities.

This should be a good discussion, especially given the recent changes in the sequencing landscape, which affect San Diego companies as well.

I have been to a couple of SDBN networking events before and had a lot of fun (the food served at the venue, Tango Del Rey, is yummy too!). Mary Canady of Comprendia does an excellent job of hosting these events, and they are a great way to meet fellow biotech professionals in the San Diego area.

We will try to live-tweet some of the salient points from the meeting, and Sakshi will blog about it tomorrow. Our Twitter handle is omespeak (frankly speaking, we would not mind a few new followers :) ).

Also, I will likely be volunteering at the registration desk. So if any San Diegans are reading this blog and plan to turn up at the event, do say hi!

Posted in General | Tagged , | 1 Comment

Comments: Life Technologies buys Ion Torrent

There was some exciting news yesterday from the next-generation sequencing (NGS) field, with the buy-out of Ion Torrent – a start-up that has developed a novel DNA sequencing machine – by the life-sciences behemoth Life Technologies (for a whopping $350 million).

The technology for sequencing DNA – the blueprint of all life – is at an exciting stage. Ten years ago, the complete human genome was sequenced at a cost of several billion dollars and decades of effort. Today, a full genome can be sequenced for as little as $10,000 and in less than a week. The price and time are about to drop further with several novel platforms unveiled recently by emerging companies such as Pacific Biosciences, Complete Genomics (which sells only sequencing services) and Ion Torrent, in addition to improved platforms announced by established companies like Illumina and Life Technologies.

Ion Torrent in particular made waves earlier this year with its new NGS platform, based on detection of the hydrogen ions released during the addition of a base [1], which it claimed would be priced around $50,000, with a few hundred dollars per chip-set to run the sequencing reactions. Although Ion Torrent has not released information about error rates, sample-prep costs and so on, the $50K price point is game-changing, especially when compared with the half a million dollars for most other machines on the market. It therefore has the potential to bring NGS technologies beyond core facilities and research consortia to individual laboratories. Now, with this acquisition by Life Technologies, the latter's far-reaching global sales force should be able to market the product much more effectively than the small company could by itself.
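
For readers curious about the idea behind ion-based detection, here is a toy sketch of flow-based sequencing as I understand it: in each flow, a single nucleotide species is washed over the chip, and incorporation releases protons in proportion to the number of bases added, so runs of identical bases give a bigger signal. The function names and the simple complement logic are my own illustration, not Ion Torrent's actual chemistry or base-calling algorithm.

```python
# Toy illustration of flow-based sequencing with ion detection.
# Not Ion Torrent's real base caller – just the general principle.

def pairs_with(template_base: str, flowed_base: str) -> bool:
    """True if the flowed nucleotide is complementary to the template base."""
    complement = {"A": "T", "T": "A", "C": "G", "G": "C"}
    return complement[template_base] == flowed_base

def sequence_by_flows(template: str, flow_order: str = "TACG", n_flows: int = 40) -> str:
    """Wash nucleotides over a template in a fixed flow order and rebuild the read."""
    read, pos = [], 0
    for i in range(n_flows):
        nucleotide = flow_order[i % len(flow_order)]
        signal = 0  # stand-in for the pH change sensed by the chip
        # Count how many template bases in a row pair with the flowed nucleotide;
        # a homopolymer run gives a proportionally larger signal in one flow.
        while pos < len(template) and pairs_with(template[pos], nucleotide):
            signal += 1
            pos += 1
        read.append(nucleotide * signal)
    return "".join(read)

template = "ATTGCCCA"                # strand being sequenced
print(sequence_by_flows(template))   # prints the complementary read: TAACGGGT
```

The practical wrinkle this sketch glosses over is that the measured signal is analog, so telling a run of, say, seven identical bases from eight is where much of the real error-rate work lies.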

The involvement of Life Technologies, which already has a few sequencing products, is interesting, but in my opinion it had almost an air of inevitability about it. Created two years ago from the merger of Invitrogen and Applied Biosystems, Life Technologies is easily one of the largest non-pharma biotechnology companies around. As anyone who has worked in a biological lab will vouch, an Invitrogen (Life Tech) product on the lab bench is difficult to escape: molecular biology enzymes and kits, cell biology and tissue culture products, fluorescent reagents, oligonucleotide synthesis services – you name it, they have it. Invitrogen achieved this near-ubiquitous status in the life science research field through continuous mergers and acquisitions of well-established players like Novatech, ResGen, Molecular Probes and Life Technologies/GIBCO (from which, interestingly enough, it has resuscitated the post-merger name). When it merged with Applied Biosystems, which had the SOLiD system, the company got a toehold in the NGS field. Seen in this context, the purchase of Ion Torrent seems almost like a logical progression!

However, as pointed out by Keith Robison, it will be interesting to see how Life Technologies positions the Ion Torrent device in its product portfolio. In addition to the SOLiD platform, the company has also recently announced another NGS machine, codenamed Starlight, which uses quantum dot fluorescence for sequencing and is therefore closer to the system being sold by Pacific Biosciences [1].

Overall, the deal should make for some interesting times in the NGS market over the next few years. As Matthew Herper notes on his blog, there is going to be a ‘price war….to the benefit of science’. With about 60% market share, Illumina currently dominates the sequencing market, and from individual accounts of researchers in the field, its marketing department has made it well entrenched. However, Illumina does not seem to have any products in the pipeline to compete at the lower price end, except for its investment in Oxford Nanopore, which uses protein pores for DNA scanning – so far an unproven technology. Pacific Biosciences and Complete Genomics are the two other major players on the horizon (another company, Helicos, which had an optical detection system, is now restructuring itself as a diagnostics company). But seemingly, the NGS field is shaping up to be a battle of the I-5 (Life Technologies is headquartered in Carlsbad, just up the freeway from Illumina in La Jolla)!

———————–

Notes and other Resources:

1. Details of some of the DNA sequencing technologies are available online at the respective company websites. However, do look out for a post here describing some of these techniques.

2. More coverage of this news by Daniel MacArthur, Keith Robison, Luke Timmerman and Matthew Herper, with further details on the Ion Torrent technology and the economic and scientific impacts of this deal.

3. An earlier post here briefly profiled Jonathan Rothberg, the founder of Ion Torrent, who earlier invented the 454 technology (now owned by Roche).

Posted in New techniques, Sequencing | Tagged , , , , , | 3 Comments

NDM-1 Superbug: Some thoughts

There has been considerable outrage in the Indian media over the recent Lancet Infectious Diseases article [1] during the past 48 hours.

“Such infections can flow in from any part of the world. It’s unfair to say it originated from India,” said ICMR director Dr VM Katoch. [Link]

“This is not supported by any scientific data. This occurs in nature and in the intestines of animals and humans universally. Similar strains found in the US and UK,” said National Centre for Disease Control Director RL Ichhpujani. [Link]

“Nobody ever used the term Mexican swine flu though the disease originated there,” said Dr Shalini Duggal, consultant Microbiologist, Dr BL Kapoor Memorial Hospital. [Link]

“India strongly refutes the naming of this enzyme as New Delhi metallo beta lactamase (NDM-1) and also refutes that hospitals in India are not safe for treatment, including medical tourism.” [Link]

“Intellectual scientific freedom is all very good but there is a conflict of interest in this research. Researches like these are examined separately according to the code of ethics,” added Srivastav. [Link]

A few comments on these statements:

1. The gene was named New Delhi metallo-beta-lactamase 1 (NDM-1) after it was isolated from a Swedish patient of South Asian origin who had been operated on in New Delhi in December 2007. The patient acquired a urinary tract infection in January 2008 caused by a carbapenem-resistant Klebsiella pneumoniae strain. This K. pneumoniae strain (and a later isolated E. coli) carried a novel metallo-beta-lactamase (MBL) gene. [2]

Considering the infection was likely nosocomial and picked up during the operation in New Delhi, the authors named the novel gene after the place it probably came from. There was no malicious intent, just standard naming practice. This is not out of the ordinary – the Ebola virus, for example, is named after the river near where it was first identified, and there are other metallo-beta-lactamase variants named after Verona (Italy), Adelaide (Australia) and São Paulo (Brazil).

2. The second, more insidious, charge against the authors is that since they have ties to the pharmaceutical industry, they are lying (a similar charge was brought up during the “H1N1 is a WHO conspiracy” hoax). All authors have to declare conflicts of interest when publishing in biomedical journals(*); not doing so is unethical (the Wakefield study published in the Lancet is a good example). Having a conflict of interest does not by itself imply wrongdoing.

3. The main point, in my opinion, is that people are presuming this to be an attack on Indian healthcare, the reason being the comments made by the authors in the discussion section of the paper:

Several of the UK source patients had undergone elective, including cosmetic, surgery while visiting India or Pakistan. India also provides cosmetic surgery for other Europeans and Americans, and blaNDM-1 will likely spread worldwide. It is disturbing, in context, to read calls in the popular press for UK patients to opt for corrective surgery in India with the aim of saving the NHS money. As our data show, such a proposal might ultimately cost the NHS substantially more than the short-term saving and we would strongly advise against such proposals. The potential for wider international spread of producers and for NDM-1-encoding plasmids to become endemic worldwide, are clear and frightening. [1]

Is this an overreaching statement by the authors? Let's look at the basis on which they make this claim.

The study found NDM-1 isolates in Chennai (44), Haryana (26), the UK (37) and various other cities in India, Pakistan and Bangladesh (73). The cases were found not only in hospital patients but also in the wider community (in people with urinary tract infections, UTIs), which is what concerned the authors. This fear is validated by a study carried out by Deshpande et al. at the Hinduja hospital, who found 22 (out of 24) NDM-1-positive strains in a three-month study [Link]. The ease of horizontal transfer of a plasmid-borne gene, coupled with human globetrotting, does indeed cast a somber view on the cost-effectiveness of medical tourism. As I type this post, NDM-1 has claimed its first fatality, so it is safe to assume that the case for concern is not overstated.

It is not just British doctors asking for caution. Alarms have been raised in India for years, only to be completely ignored. Since antibiotics are wrongly prescribed, overused and unregulated in India, it is easy to generate resistant strains, and several papers have documented the rise of antibiotic resistance in the subcontinent and the need for proper antibiotic usage [Link].

It is important to see the rising resistance of pathogens in the context of drug discovery. While novel antibiotic discovery flourished from the 1940s to the 1980s, in the past two decades we have put out only one new class of antibiotic (the oxazolidinones)! The rest have been modifications of existing classes.

Also, the absence of a central monitoring agency (like those in the UK or US) to track antimicrobial resistance in India makes it hard to know how many resistant pathogens are out there. Many countries have a multi-pronged strategy in place to deal with the growing crisis of drug resistance [3], but India has no such strategy. As Ghafur puts it:

“The easiest way of tackling the superbug problem is to use the notorious ostrich strategy which denies the existence of the problem: stop looking for these bugs, stop looking for the hidden resistance mechanisms and closing your eyes even if you find them.” [Link]

There is nothing to be outraged about in the details of the Lancet paper, but there is much to be concerned about. An effective way to deal with the concerns it raises would be to establish a central monitoring agency and to regulate antibiotic use in India. This would not only reassure the global medical community that India too is concerned about superbugs but also bring about much-needed reforms in Indian medicine.

[1] “Emergence of a new antibiotic resistance mechanism in India, Pakistan, and the UK: a molecular, biological, and epidemiological study” by Kumarasamy et al., The Lancet Infectious Diseases, DOI: 10.1016/S1473-3099(10)70143-2

[2] “Characterization of a New Metallo-β-Lactamase Gene, blaNDM-1, and a Novel Erythromycin Esterase Gene Carried on a Unique Genetic Structure in Klebsiella pneumoniae Sequence Type 14 from India” by Yong et al., Antimicrob. Agents Chemother. 2009 December; 53(12): 5046–5054

[3] “Moving from recommendation to implementation and audit: Part 1, Current recommendations and programs: a critical commentary” by Carbon et al., Clin. Microbiol. Infect. 8 (Suppl. 2): 92–106

Posted in Drugs, General, News | Tagged , | 3 Comments