The mood at day one of Consumer Genetics on Tuesday could perhaps best be described as “cautious optimism meets a huge list of unknowns”. Roughly 200-400 attendees (my guess based on room size, not official numbers) packed a single conference room at the Hynes Convention Center. Attendance appeared to be up from last year, and based on a “suits vs. sandals” straw poll it appeared that business/finance community attendance in particular was up.
Notably present on Day 1 were a number of people from each of the represented sequencing companies (7 in total), including an actual, real-life PGM from Ion Torrent (LIFE) (although notably no sample prep in sight). Notably absent (for the most part) compared to past years were actual consumer genetic testing companies. While I did hear that people from Navigenics, Knome, 23andMe, and others were present, there were no commercial booths, and the schedule for the conference, which had featured a breakout session by these companies in the past, is light on consumer genetics speakers in at least the first two days. Day three will feature representatives from Navigenics and Informed DNA, but the rest of the community appears to be in silent mode following the developments in consumer genetics regulation in the last year (Walgreens – Pathway Genomics and the Senate hearings last summer), as well as the recent FDA letters to Lumigenix, American International Biotechnology Services, and Precision Quality DNA. Dan Vorhaus of the Genomic Law Report summarized the current standing of the regulatory environment better than I could ever hope to heading into ASCO here, and Daniel MacArthur discusses European actions here in Wired.
I will leave the debate around regulation to others for now, and expect further discussion on days 2 and 3 of CGC2011. I will use the rest of this post to summarize some of my thoughts about day one, which was largely focused on tech and the clinical setting (note: I could not make the morning “Kick Off State of the Union” session, so my comments there will be light). For a record of my live tweets from yesterday, which have “all the throughput, accuracy, cost and turnaround time data I could grab”, check out @BrighamHyde.
Top 3 comments overheard at day one of CGC2011:
- “I thought this conference was ‘Consumer Genomics’, so why is everyone talking about Personalized Medicine?”
There was definitely mixed opinion, both in the room and on twitter, about what Consumer Genetics “is”. Most presentations had a highly clinical tilt, but I thought Heidi Rehm’s (Partners) presentation highlighted a segmentation underway in the genetic analysis market. To paraphrase a great slide I wish I had (as accurately as I can): right now sequencing, informatics, and interpretation are being done together; however, she expects this to segment into three groups, with interpretation representing a very high volume area as new discoveries are made. This “interpretation” piece may be the most accurate representation of what “Consumer Genetics” becomes; however, the commercial opportunity for it is highly undefined (read: nearly un-fundable at the moment).
- ”The rate of discovery of new variants (a la Myriad data, Partners data (H. Rehm), MHI data (M. Phillips)) creates a massive interpretation problem. What do the new variants mean clinically? Who interprets that? Where do we even go to find out? And who updates the patient after they leave the clinic when new info about their variants is discovered?”
A debate I have heard a lot over the past few years is rare variants vs. common variants as the source of disease. Based on the data seen yesterday and reported earlier this year, both are clearly important, but we may continue to underestimate the number of important (to at least one patient) rare variants. The early results of sequencing gene panels in cardiomyopathy seem to indicate that there is significant work, as well as significant opportunity, in determining these variants accurately and implementing them into treatment paradigms.
- ”In order to do any meaningful interpretation we have to have strong (and large) cohorts, especially to detect and interpret rare variants”
George Church as well as others continue to make this point, and given the rare variant discussion it seems clear we don’t have enough big cohorts. My question is: who pays for all of this? Is the NIH going to get us all sequenced if we volunteer for the PGP? Another question is how big a study is “big enough”? I’ve heard this topic debated, and there are well qualified statistical approaches in place for this, but every researcher has heard the comment “You need more N” from a reviewer or when giving a talk. How much N is enough? A rough sense of the scale involved is sketched below.
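For a feel for the numbers, here is a minimal back-of-the-envelope sketch (my own, not from any presenter) using the standard normal-approximation sample-size formula for comparing two proportions; the carrier frequencies and effect size are purely hypothetical:

```python
from math import sqrt
from scipy.stats import norm

def n_per_group(p1, p2, alpha=0.05, power=0.8):
    """Per-group sample size to detect a difference between two
    carrier frequencies (standard two-proportion approximation)."""
    z_a = norm.ppf(1 - alpha / 2)   # two-sided significance threshold
    z_b = norm.ppf(power)           # desired power
    p_bar = (p1 + p2) / 2
    num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
           + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return num / (p1 - p2) ** 2

# Hypothetical example: a rare variant carried by 0.5% of controls and
# 1.5% of cases (a 3x enrichment) already needs on the order of ~1,500
# subjects per arm at 80% power.
print(round(n_per_group(0.005, 0.015)))
```

Even under these generous assumptions the required cohorts are large, and for rarer variants or smaller effects N grows very quickly, which is exactly the cost question raised above.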
Highlights From the Sequencing Presentations (a.k.a. the Steel Cage Match with a Rothberg Chaser)
In general the session was very well run by moderator Paul Morrison (DFCI), and all presenters did a great job talking about their tech in a short time frame. Check my twitter feed for the baseball-card stats I was able to pull on each instrument; below is a summary for each company.
Illumina – David Bentley: The 900-pound gorilla, Illumina, did not present anything new yesterday afternoon as far as instrument specs or service offerings, but did hint that today’s presentation by CEO Jay Flatley may have some interesting tidbits. Illumina did outline their end-to-end sequencing offering for the various segments of NGS (from MiSeq to HiSeq). Performance stats look to be on track with, if not ahead of, where expectations had been earlier this year for both MiSeq and HiSeq, and Bentley briefly discussed some of the new chemistry approaches the company has developed that could improve GC bias as well as accuracy. The key questions that seemed to remain were around MiSeq performance at launch and its role in the clinic.
Complete Genomics – Steve Lincoln: Not much new from Complete at this conference, but the company continues to enhance its internal performance, has now sequenced 1,500 genomes, and lists an ASP of $5,000 per WGS. The company’s informatics approach continues to impress, and its growing tumor/normal database and robust understanding of complex variants (copy number, structural, other) could prove a differentiator as the database and understanding grow.
Pacific Biosciences – Steve Turner: PacBio presented its system and recent data, both from the Haitian cholera paper and from some new studies (including an awesome presentation of kinetic mapping of the mitochondrial genome). The company talked about long reads (13 kbp) and how its approach limits error rates (raw read error rates are still high, but the company is convinced this is overcome by the approach). Although typical NGS data on WGS still seems a bit lacking, you have to be impressed by what they can do on smaller genomes, particularly bacterial and viral. The combination of long reads and kinetic sequencing data has the potential to be super powerful. Sample prep development appears to be underway, and it will be interesting to see what comes out of the collaboration with Gen-Probe on that front; it is potentially disruptive in the infectious disease testing field. While questions remain about throughput, accuracy, and the cost of the instrument, the tech is undeniably cool and it seems like it will have a niche.
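As an aside on the “raw error rate is high but the approach overcomes it” argument: if errors really are random and independent, a simple majority-vote consensus over repeated passes of the same molecule drives the effective error rate down quickly. A minimal sketch of the arithmetic (my own illustration, not PacBio’s actual consensus algorithm, and the 15% per-pass figure is just an assumed round number):

```python
from math import comb

def consensus_error(p_raw, passes):
    """Probability that a majority vote over `passes` independent
    observations of the same base is wrong, given per-pass error p_raw."""
    # The consensus errs when more than half of the passes are wrong.
    return sum(comb(passes, k) * p_raw**k * (1 - p_raw)**(passes - k)
               for k in range(passes // 2 + 1, passes + 1))

# Purely illustrative per-pass (raw) error rate; actual figures may differ.
p_raw = 0.15
for n in (1, 3, 5, 11, 15):
    print(f"{n:>2} passes -> consensus error ~ {consensus_error(p_raw, n):.2e}")
```

The catch, of course, is that this only works if the errors really are random rather than systematic, and each extra pass costs throughput, which is why the WGS numbers matter.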
GnuBIO – Tal Raz: GnuBIO made the first major announcement of this conference yesterday, announcing the launch of their prototype sequencer based on microfluidic technology. The company announced prototype placement at the MHI at the end of the month, and showed data which included >200bp reads and error rates of 0.03%. The company plans to pursue beta sites at the end of the year and is gearing up for commercial launch in 2012. It appears the company is positioning itself commercially for the targeted sequencing market, with four main differentiators: fully on-board sample prep (!!!), target marker flexibility, extremely fast turnaround, and very low cost per sample. The potential of the microfluidic approach looks undeniably strong, and GnuBIO appears to be positioning itself for the hospital setting. Looking forward to seeing a “Bread Box” and more data.
Noblegen – Frank Feist: Noblegen discussed their NGS approach, which combines nanopore single-molecule sequencing with optical detection. Put simply, an electrical field pushes synthetic DNA strands tagged with beacons through nanopores, creating a flash when each beacon is released, which is recorded by an optical device. Data was sparse, but a few things caught my attention. First, no amplification is required, ever; the sample prep assembles long DNA strands using a ligation/digestion strategy, which is good news for future error rates. Second, it is very, very scalable; the combination of nanopore and optical detection means they should have a lot of headroom as they develop the nanopore chip to scale. Third, low cost; Noblegen was clearly targeting the clinical setting and trying to position for a cost-sensitive, error-sensitive clinical lab. Current all-in, end-to-end price per run was quoted at ~$2,000. It will be very interesting to see how this develops and where the tech, “Optipore Sequencing”, ends up fitting in the spectrum of NGS approaches.
Nabsys – Barrett Bready: Nabsys discussed their hybridization-assisted nanopore sequencing approach. Although the talk was mostly conceptual, the approach continues to intrigue. However, data from either Nabsys or IBM (who was here last year, and also relies on a conductance/voltage readout) has been sparse. Looking forward to seeing more here from the companies or from the academic sphere.
Life Technologies (Ion Torrent) – Jonathan Rothberg: A keynote speaker this year, Rothberg presented an overview of the PGM semiconductor approach to sequencing and gave a great example of the type of analysis the PGM can enable with the recent E. coli outbreak in Germany. The idea of crowd-sourcing data analysis had a cool “consumerish” feel to it and demonstrated how people will deal with and interact with this data in the near term. Stats on the PGM were presented, although nothing particularly new on the Ion 316 or Ion 318 chip; the publicly stated target for the Ion 318 chip remains 1Gb. Rothberg did discuss sample prep and presented somewhat improved turnaround stats around the OneTouch system from LIFE. However, when pushed in the Q&A, it appeared library prep had not been incorporated in those timing calculations and that the “sample prep” issue remains a topic for the Ion Torrent system. Rothberg did reiterate the three Life Technologies Grand Challenges, including $3MM in prize money for improvements to sample prep, throughput, and analysis. While the potential of this system remains very attractive, its exact positioning in the near term, particularly in comparison to MiSeq, remains a question.
Apologies for some paraphrasing in this article; any presenters should feel free to comment if you feel I misrepresented anything. I will be in and out of the meeting on Day 2, so please follow @rdgene, @skram1, and @wimutti for live tweeting on Day 2. I hope to tweet from the Illumina keynote this afternoon and will try to write up a discussion of their presentation. Thanks for all the helpful discussion, and thanks to the organizers of CGC2011 for featuring my blog at the conference this year.