Wednesday, October 27 • 12:30pm - 2:30pm
How to Deal with Volume and Velocity Associated with Hundreds of Terabytes (and Beyond) of Genomics Data? - Room 280


Whole-genome shotgun sequencing (WGS) has enabled numerous breakthroughs in large-scale comparative genomics research. At the same time, the size of genomic datasets has grown exponentially over the last few years.

This tutorial will focus on two emerging techniques that address the challenges associated with Volume and Velocity.

1. Repeated and Merged Bloom Filters (RAMBO) for processing hundreds of terabytes of sequence data. We will show how we index 170 TB of bacterial and viral sequences in less than 14 hours on a shared cluster at Rice, enabling searches for similar or anomalous sequences in a few milliseconds.
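To make the RAMBO idea concrete, here is a minimal, illustrative sketch: R repetitions of B Bloom filters, where each dataset is merged into one group per repetition, and a query intersects the positive groups to recover candidate datasets. The class names, parameters (`R`, `B`), and hash-based group assignment are assumptions for illustration, not the tutorial's actual implementation or its tuned parameters.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: an m-bit array probed by k salted SHA-1 hashes."""
    def __init__(self, m=1 << 16, k=3):
        self.m, self.k = m, k
        self.bits = bytearray(m)

    def _positions(self, item):
        for i in range(self.k):
            h = hashlib.sha1(f"{i}:{item}".encode()).hexdigest()
            yield int(h, 16) % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p] = 1

    def __contains__(self, item):
        # Bloom filters have false positives but never false negatives.
        return all(self.bits[p] for p in self._positions(item))

class RAMBO:
    """Toy RAMBO index: R repetitions x B groups of merged Bloom filters.
    Each dataset is merged into one group per repetition; a query's
    candidate datasets are the intersection of the positive groups."""
    def __init__(self, num_datasets, R=4, B=8):
        self.R, self.B, self.num_datasets = R, B, num_datasets
        self.grids = [[BloomFilter() for _ in range(B)] for _ in range(R)]
        # Hypothetical group assignment: a cheap hash of the dataset id.
        self.assign = [
            [int(hashlib.sha1(f"{r}:{d}".encode()).hexdigest(), 16) % B
             for d in range(num_datasets)]
            for r in range(R)
        ]

    def insert(self, dataset_id, kmers):
        for r in range(self.R):
            bf = self.grids[r][self.assign[r][dataset_id]]
            for km in kmers:
                bf.add(km)

    def query(self, kmer):
        candidates = set(range(self.num_datasets))
        for r in range(self.R):
            hits = {b for b in range(self.B) if kmer in self.grids[r][b]}
            candidates &= {d for d in range(self.num_datasets)
                           if self.assign[r][d] in hits}
        return candidates
```

Because merged filters are shared across datasets in a group, the index is far smaller than one Bloom filter per dataset, and repetitions drive down the false-positive rate of the intersection.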

2. How to subsample high-velocity metagenomics data while keeping its diversity intact. We will discuss how to handle data that is generated at a very high rate, and present an efficient sampling scheme roughly as fast as random sampling (RS) that, unlike RS, preserves the diversity of the genomic pool. We will also discuss how these techniques can be pushed to the edge thanks to their tiny memory requirements.
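The flavor of diversity-preserving subsampling can be sketched in one pass: hash each read into an LSH-style bucket and keep only a few representatives per bucket, so abundant species stop consuming the sample budget while rare species still get kept. The min-k-mer-hash bucketing and the `per_bucket` parameter below are illustrative assumptions, not the tutorial's exact scheme.

```python
import hashlib
from collections import defaultdict

def kmers(read, k=8):
    """All length-k substrings of a read."""
    return (read[i:i + k] for i in range(len(read) - k + 1))

def minhash_bucket(read, k=8):
    """Crude LSH bucket: the minimum k-mer hash of the read. Similar reads
    share many k-mers and therefore tend to share a bucket (an assumption
    of this sketch)."""
    return min(int(hashlib.sha1(km.encode()).hexdigest(), 16)
               for km in kmers(read, k))

def diversity_sample(stream, per_bucket=2, k=8):
    """One-pass subsampler: keep at most `per_bucket` reads per bucket.
    Memory scales with the number of distinct buckets (the diversity of
    the pool), not with the length of the stream."""
    kept = defaultdict(list)
    for read in stream:
        b = minhash_bucket(read, k)
        if len(kept[b]) < per_bucket:
            kept[b].append(read)
    return [r for reads in kept.values() for r in reads]
```

Unlike random sampling, which would almost surely miss a species appearing once in a million reads, this scheme keeps a representative of every bucket it ever sees, at the cost of one hash per read.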

Hands-on exercises with both techniques will be provided.


Ben Coleman, Rice University
Gaurav Gupta, Rice University
Josh Engels, Rice University
Benito Geordie, Rice University
Alan Ji, Rice University
Junyan Zhang (Henry), Rice University

Wednesday October 27, 2021 12:30pm - 2:30pm CDT
Room 280
