Searching Uncultivated Bacteria and Archaea (UBA) Genomes for Important Genes

Note: This is a tutorial for researchers to use an online dataset of bacterial and archaeal genomes.

Outline

  1. Introduction
  2. Searching the Uncultivated Bacteria and Archaea (UBA) dataset using HMMs
  3. Addenda: Limitations and Alternatives

Supporting files on github: https://github.com/tylerbarnum/metagenomics-tutorials

Introduction

New DNA sequencing and assembly technology provides researchers with more genomic sequences to explore than ever before. Any gene family is likely to be more diverse in metagenomes, which often contain uncultivated microorganisms, than in genomes from only cultivated microorganisms. Plus, when metagenomic sequences (“contigs” or “scaffolds” of contigs) have been sorted (“binned”) into genomes, the function and phylogeny of all genes in a microbial strain can be analyzed together. The discovery of bacteria that perform comammox (complete ammonia oxidation to nitrate) is a great example of this technology’s power.

The trouble in studying sequences from metagenomes is navigating databases that are both large and disparate. Metagenome-assembled genomes are distributed across the NCBI non-redundant protein sequences (nr) database, the Joint Genome Institute (JGI) Integrated Microbial Genomes (IMG) system, and the ggKbase system administered by Dr. Jill Banfield’s laboratory. Almost all published genomes can be found in the NCBI and/or IMG databases, and every system has a simple BLAST interface to quickly search for genes within genomes. Metagenomic sequences that have not been binned can be found in either the IMG (~10,000 metagenomes) or ggKbase (~1,000 metagenomes) databases. While a BLAST search can be performed on the entire NCBI nr and ggKbase databases, IMG limits BLAST searches to 100 genomes or metagenomes at once. Ideally, researchers have uploaded annotated genomes to NCBI so they can be quickly queried, but sometimes they have not.

Unfortunately, one of the largest metagenomic datasets is not currently searchable. The Australian Centre for Ecogenomics at the University of Queensland recently assembled and binned about 8,000 metagenome-assembled genomes from publicly available reads from over 1,500 metagenomes, which the authors refer to as the Uncultivated Bacteria and Archaea (UBA) genomes dataset. The UBA genomes substantially expanded the tree of life and may be a valuable resource for researchers who did not have the resources to obtain metagenome-assembled genomes from their datasets* or who want to perform comparative genomics of particular taxa across metagenomes.

Figure 1 from Parks et al. 2017 displaying the quality of ~8,000 UBA genomes assessed by CheckM

* That being said, assembling genomes from metagenomes is easier than ever and a great opportunity for students to learn programming skills that will benefit their data preparation and analysis in other areas. I hope that the strength of metagenomic super groups does not discourage other labs from adopting this technology or publishing reads along with their research. 

While researchers can download UBA genomes from the NCBI BioProject (PRJNA348753) and annotate the genomes themselves, researchers cannot (yet*) search for genes in UBA genomes from any online portal. That is, good luck trying to find a UBA genome with an online BLAST search.

The best option available is to download UBA genomes or annotations from the author’s repository and perform a search on your computer. This tutorial describes how to do that: download and use Hidden Markov Models to search the UBA genomes for novel proteins. In theory, you can slightly alter this code to search through any number of genome annotations, such as a bulk download from JGI IMG.

Edit: An update on availability

I’ve been told that they will be uploaded to NCBI at some point, after which they will presumably be absorbed by IMG.

Additionally, Rob Edwards (UCSD) kindly informed me that the UBA genomes are available and searchable on PATRIC. My understanding is that with an account, PATRIC works similarly to IMG: word searches on all functional annotations and BLAST searches on selected groups of organisms. The easiest way to search all UBA genomes is to search Genomes for “UBA,” select all, then select “Group.” You can now perform a BLAST search on the group. More documentation can be found in PATRIC’s online guide.


With other options available now or in the future, consider whether the workflow below is the best option for you to study UBA genomes. The transferable skills from this tutorial apply to searching any number of genomes for matches to a verified set of genes. I anticipate you will find the tutorial valuable for other datasets – I use these same techniques repeatedly in my own metagenome projects.

Searching the Uncultivated Bacteria and Archaea (UBA) dataset using HMMs

I use Hidden Markov Models (HMMs) and the program HMMER to search for new proteins because in my experience HMMs are better than BLAST at identifying sequences that have low percent identity but nevertheless belong within a group. BLASTP aligns short stretches of a single reference sequence to score new sequences according to their shared sequence identity and a matrix of values for different amino acid substitutions. HMMER uses a Hidden Markov Model, a statistical model trained on a set of reference proteins, to score new sequences. To understand the difference between the two techniques, consider two proteins that share 40-60% amino acid identity to a set of reference proteins, yet only one protein is within the phylogenetic clade of reference proteins. BLASTP will score both proteins similarly based on % amino acid identity. HMMER, which does not directly use % amino acid identity, will score the ingroup protein higher than the outgroup protein. Essentially, because HMMER uses statistical models of proteins, it is more robust against sequence variation. (BLAST is still great for identifying the most closely related sequences).

For this tutorial, I will be identifying chlorite dismutase (cld) from a reference set of experimentally confirmed proteins. Cld, and especially periplasmic Cld (pCld), catalyzes the degradation of toxic chlorite to chloride and oxygen during the anaerobic respiration of perchlorate and chlorate to chloride, and it is rather oddly conserved among bacteria performing nitrite oxidation to nitrate. You can use your own protein (sub)family or perhaps the common phylogenetic marker for metagenomic assemblies, ribosomal protein subunit S3 (download a gapless FASTA for the pfam 00189 seed), which will be found in most of these genomes.

Necessary files

  • A FASTA format file of reference proteins ending in “.faa” (e.g. pcld.faa)
  • A FASTA format file of outgroup proteins ending in “-outgroup.faa” (e.g. pcld-outgroup.faa)

Dependencies

  • HMMER (hmmbuild, hmmsearch)
  • muscle (alignments)
  • FastTree (phylogenetic trees)
  • Standard Unix tools (wget, tar, sed, awk, perl)

Download annotated UBA genomes

Download and unpack the UBA genomes first, since it takes about 3 hours to complete. If your disk space is limited or your internet connection is poor, consider downloading the annotations divided into parts, which is explained in the final section of this post. For this tutorial I am only downloading the bacterial genomes, but you can perform this same analysis on archaeal genomes with minor changes to the code (it will probably work to Find and Replace “bacteria” with “archaea”, then Find and Replace “bac” with “ar”).

# Download the "readme" file, which describes available files for download 
wget https://data.ace.uq.edu.au/public/misc_downloads/uba_genomes/readme

# Download annotations for all UBA bacterial genomes (archaeal genomes are a separate file)
# Size: 54 Gb, Runtime: ~90 minutes
wget https://data.ace.uq.edu.au/public/misc_downloads/uba_genomes/uba_bac_prokka.tar.gz

# Unpack the tarball
# Size: 207 Gb, Runtime: ~90 minutes
tar -xzvf uba_bac_prokka.tar.gz

# Optional: remove unpacked tarball to save space
# rm uba_bac_prokka.tar.gz
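The Find-and-Replace suggested above for adapting these commands to archaea can be previewed with sed before re-running anything. This is a sketch; the archaeal file name it produces (uba_ar_prokka.tar.gz) is my assumption based on the naming pattern, so check the readme before downloading:

```shell
# Preview the archaeal command by rewriting the bacterial one
# (assumed file name; verify against the readme)
echo 'wget https://data.ace.uq.edu.au/public/misc_downloads/uba_genomes/uba_bac_prokka.tar.gz' \
  | sed 's/bacteria/archaea/; s/bac/ar/'
```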

While the genomes are being downloaded and unpacked, you may skip to “Build the Hidden Markov Model” below before returning to the next section.

Compile a single file of all proteins with headers identifying the source genome

Searching for proteins from UBA genomes is greatly simplified by combining all of the genomes’ proteins into a single file. However, you will later need to trace each protein back to its source genome, and the default protein headers do not identify the genome. The following for-loop will concatenate all proteins while rewriting each header to name its UBA genome:

# Before rename: 
# Single proteome: UBA999.faa
# >BNPHCMLN_00001 hypothetical protein

# Concatenate renamed proteins into one file
# Size: 6.3 Gb, Runtime: ~5-10 minutes
# Note: the sed pattern is anchored to ">" so only FASTA headers are rewritten
for GENOME in `ls bacteria/`; 
do sed "s|^>[^_]*_|>${GENOME}_|" bacteria/${GENOME}/${GENOME}.faa >> uba-bac-proteins.faa; 
done

# After rename:
# All proteomes: uba-bac-proteins.faa
# >UBA999_00001 hypothetical protein

# Optional: remove all other files to save space.
# WARNING: Only do this if you're CERTAIN you do not need the files
# rm -r bacteria/

Build the Hidden Markov Model

After installing HMMER and muscle, you are ready to build a Hidden Markov Model from your reference protein set. In constructing a reference set, select proteins that are as broad as possible while only including verified proteins. Previous studies have used HMMs from published protein families (PFAM, TIGRFAM, etc.) and/or created their own custom HMMs (e.g. Anantharaman et al. 2016, available here). For reproducibility, I encourage you to publish any custom HMMs and the sequences you used to construct them.

First, some organization. I find that it’s helpful to assign Unix variables so that code is completely interchangeable. I also suggest keeping a directory that contains all of your HMMs in one place (PATH_TO_HMM) as well as a directory that contains the reference sets (PATH_TO_REF). For this code to work, you need to edit the protein name and E-value threshold for each protein.

# Assign variable for protein name and E value threshold
# These should be the only variables you *have* to change for each protein

PROTEIN=pcld
E_VALUE=1e-50

# Assign variable for paths to reference protein sets and to HMMs
PATH_TO_HMM=HMMS
PATH_TO_REF=REFERENCE_SETS
REFERENCE=$PATH_TO_REF/$PROTEIN.faa 
OUTGROUP=$PATH_TO_REF/$PROTEIN-outgroup.faa 

# Create directories
mkdir $PATH_TO_HMM
mkdir $PATH_TO_REF
mkdir ./OUTPUT/

Now create the HMM.

# Change to the HMM directory
cd $PATH_TO_HMM
# Align reference proteins
muscle -clwstrict -in ../$REFERENCE -out $PROTEIN.aln;

# Build Hidden Markov Model
hmmbuild $PROTEIN.hmm $PROTEIN.aln

# Test your HMM by searching it against your reference proteins
hmmsearch $PROTEIN.hmm ../$REFERENCE

# Return to directory with UBA genomes
cd -

Search the UBA genomes using HMMER

The following is a simple HMMER search. You will need to calibrate the E-value (-E) or score (-T) cutoffs based on the quality and diversity of the reference set the HMM was trained on. We set those above when we indicated which protein ($PROTEIN) we’d be studying.

# Perform HMMER search using HMM and a threshold
HMM_OUT=./OUTPUT/table-${PROTEIN}.out
hmmsearch -E $E_VALUE --tblout $HMM_OUT $PATH_TO_HMM/$PROTEIN.hmm uba-bac-proteins.faa

The output contains the target name in the first column and the E-value and score in separate columns. All lines other than the targets start with a hash (#).
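Those columns can be pulled out with standard Unix tools. A minimal sketch on a mock two-hit table (the column order follows HMMER’s --tblout format; the target IDs and values here are invented for the demo):

```shell
# Mock --tblout excerpt: target name, accessions, query name, E-value, score, bias
cat > mock-table.out <<'EOF'
# target name  accession  query name  accession  E-value   score  bias
#------------- ---------- ----------- ---------- --------- ------ -----
UBA0001_00001  -          pcld        -          1.1e-180  600.2  0.1
UBA0002_00042  -          pcld        -          3.5e-152  505.9  0.0
EOF

# Count hits: every non-comment line is one target
grep -vc '^#' mock-table.out

# Print target name (column 1) and full-sequence E-value (column 5)
grep -v '^#' mock-table.out | awk '{print $1, $5}'
```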

[Screenshot: hmmsearch --tblout output for the Cld search]

Results: HMMER identified 14 proteins above my threshold E-value for Cld. These proteins are very likely Cld and were even predicted as such by prokka (see: “description of target” at right). However, we know nothing else about these proteins without further analysis.

Save protein hits from UBA genomes

Identifying novel proteins is the goal but not the last step of analysis. You will likely want to create a FASTA file of your hits so that you can compare the new proteins to old proteins.

# Collect the headers of sequences above the threshold i.e. "hits" and sort
LIST_OF_HITS=./OUTPUT/hits-${PROTEIN}.txt
awk '{print $1}' $HMM_OUT > $LIST_OF_HITS
sort $LIST_OF_HITS -o $LIST_OF_HITS

# Optional: delete all lines containing "#" using sed (on macOS, use `sed -i ''`)
sed -i '/#/d' $LIST_OF_HITS

# Collect proteins using headers
# Perl code from: http://bioinformatics.cvr.ac.uk/blog/short-command-lines-for-manipulation-fastq-and-fasta-sequence-files/
HITS=./OUTPUT/hits-${PROTEIN}.faa
perl -ne 'if(/^>(\S+)/){$c=$i{$1}}$c?print:chomp;$i{$_}=1 if @ARGV' $LIST_OF_HITS uba-bac-proteins.faa > $HITS
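If the perl one-liner is opaque, an equivalent awk filter works too. A sketch on mock inputs (the header IDs and sequences are invented for the demo):

```shell
# Mock inputs: one wanted header ID and a two-record FASTA
printf 'UBA0001_00001\n' > demo-hits.txt
cat > demo-proteins.faa <<'EOF'
>UBA0001_00001 hypothetical protein
MKFG
>UBA0001_00002 other protein
MLLL
EOF

# First pass stores wanted IDs; second pass prints records whose
# header ID (">" stripped) is in the wanted set
awk 'NR==FNR {want[$1]=1; next}
     /^>/    {keep = (substr($1, 2) in want)}
     keep' demo-hits.txt demo-proteins.faa
```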

Obtain metadata on UBA genomes of hits

Additionally, it’s helpful to know more about the genomes hosting the hits, which you can identify using the UBA ID we placed in the header of every protein in the dataset. Download Supplementary Table 2 (“Characteristics of UBA genomes”) and save it as a csv or download a copy from my github account (see: Outline).

# Create list of genome IDs appended with a comma, de-duplicated, and first line removed
# Note: Any line not a full UBA ID like "UBA#####," will return extra results
LIST_OF_GENOMES=./OUTPUT/genomes-${PROTEIN}.txt
sed 's/_.*/,/' $LIST_OF_HITS > $LIST_OF_GENOMES
sort -u $LIST_OF_GENOMES -o $LIST_OF_GENOMES
echo "$(tail -n +2 $LIST_OF_GENOMES)" > $LIST_OF_GENOMES

# Create csv table of genome information from Supplementary Table 2
TABLE_OF_GENOMES=./OUTPUT/genomes-${PROTEIN}.csv
head -1 supp_table_2.csv > $TABLE_OF_GENOMES
while read LINE; do grep "$LINE" supp_table_2.csv >> $TABLE_OF_GENOMES; done <$LIST_OF_GENOMES

# You may open this in a table viewing software, or...
# View "UBA Genome ID," "NCBI Organism Name," "CheckM Completeness," "CheckM Contamination" (fields 2, 8, 5, and 6)
awk -F "\"*,\"*" '{print $2,$8,$5,$6}' $TABLE_OF_GENOMES


Results: Cld is present in 11 genomes in the genus Nitrospira (expected) and 1 genome in the order Gallionellales (unexpected!).  The interesting genome, UBA7399 Gallionellales, is 79.3% complete and 1.46% contaminated, which is not great but not bad.

Align with reference proteins and build a phylogenetic tree

Before proceeding with your analyses, I highly suggest you create an alignment and phylogenetic tree to confirm that the new proteins contain the expected key amino acids and have the correct phylogenetic placement. I open and view alignments and trees in graphical user interfaces, specifically AliView and MEGA.

# Align reference proteins with hits and outgroup proteins
# Note: Alignment is computationally intensive for longer/more sequences
ALL_PROTEINS=./OUTPUT/all-${PROTEIN}
cat $REFERENCE $OUTGROUP $HITS > $ALL_PROTEINS.faa
muscle -in $ALL_PROTEINS.faa -out $ALL_PROTEINS.aln

# Create tree (default settings for simplicity)
FastTree $ALL_PROTEINS.aln > $ALL_PROTEINS.tre

# Open and view the alignment with GUI software such as AliView 
# Open and view the tree with GUI software such as MEGA
# Edit PDF in Adobe Illustrator or InkScape if you're fancy

[Figures: alignment and phylogenetic tree of pCld hits with reference and outgroup proteins]

Results: The alignment and tree are as expected. Cld from Nitrospira spp. group with Cld from previously sequenced Nitrospira. The Cld UBA7399_00885 from genome UBA7399 Gallionellales has lower sequence identity to other proteins but groups with Cld from characterized perchlorate- and chlorate-reducing bacteria, specifically those in the Betaproteobacteria (data not shown). I am assuming that Gallionellales strain is a (per)chlorate-reducing bacterium and will save its genome for later comparative studies.

Conclusions and Next Steps

With the above workflow, you have obtained:

  • An HMM model for searching proteomes
  • A list of protein hits above a significance threshold
  • Protein sequences for hits
  • Information on which UBA genomes contained hits
  • An alignment of hits with reference protein sequences
  • A phylogenetic tree of hits with reference protein sequences

Where you go from here depends on your scientific question. In most cases, you will save the specific subset of UBA genomes you’re interested in so that you can inspect those genomes further. You may want to confirm that genes on the same contig as your hit match the assigned phylogeny of the UBA genome.

For my analysis, it’s rather interesting that UBA7399 Gallionellales contains cld. A quick BLAST search of its ribosomal protein S3 sequence, found by a simple word search, supports its assignment in the Gallionellales, making it the first such taxon known to contain cld. Also a first: it contains no other known genes for perchlorate reduction, chlorate reduction, or nitrite oxidation, and the genes in close proximity to cld are not informative. However, my own research has encountered the widely known fact that horizontally transferred genes, such as those for perchlorate and chlorate reduction, assemble poorly from metagenomes, and the genome is only ~80% complete. It’s helpful to remember that the maxim “absence of evidence is not evidence of absence” is especially true in genome-resolved metagenomics.

Other Examples

I don’t intend to use any of the following data but want to show a couple of additional examples.

Dissimilatory sulfite reductase alpha subunit (DsrA)

DsrA is half of the DsrAB dimer and essential for dissimilatory sulfate reduction, a widespread and important process. When using my dsrA.hmm and an E-value < 1e-100, the 7,280-genome bacterial UBA dataset contains 60 metagenome-assembled genomes with dissimilatory sulfite reductase, which are listed below. In comparison, the Anantharaman et al. 2018 paper identified dsrA in 123 metagenome-assembled genomes from over 4,000 genomes, including several phyla never before associated with dissimilatory sulfate reduction. One of those phyla, the Acidobacteria, is again represented here!

NCBI Organism Name, CheckM Completeness, CheckM Contamination

Acidobacteria bacterium UBA6911 [phylum] 86.38 5.13
Anaerolineales bacterium UBA5796 [order] 95.91 2
Deltaproteobacteria bacterium UBA3076 [class] 94.03 1.98
Deltaproteobacteria bacterium UBA5754 [class] 80.36 0.6
Desulfarculales bacterium UBA4804 [order] 91.29 1.29
Desulfatitalea sp. UBA4791 [genus] 74.09 3.55
Desulfobacter sp. UBA2225 [genus] 95.86 1.29
Desulfobacteraceae bacterium UBA2174 [family] 96.13 1.94
Desulfobacteraceae bacterium UBA2771 [family] 89.19 1.29
Desulfobacteraceae bacterium UBA4064 [family] 97.42 0.32
Desulfobacteraceae bacterium UBA4769 [family] 95.48 1.94
Desulfobacteraceae bacterium UBA5605 [family] 91.61 0.32
Desulfobacteraceae bacterium UBA5616 [family] 93.55 2.96
Desulfobacteraceae bacterium UBA5623 [family] 83.26 2.47
Desulfobacteraceae bacterium UBA5746 [family] 94.84 2.37
Desulfobacteraceae bacterium UBA5852 [family] 91.54 6.16
Desulfobulbaceae bacterium UBA2213 [family] 58.77 1.75
Desulfobulbaceae bacterium UBA2262 [family] 89.16 4.96
Desulfobulbaceae bacterium UBA2270 [family] 90.75 6.85
Desulfobulbaceae bacterium UBA2273 [family] 99.34 0.2
Desulfobulbaceae bacterium UBA2276 [family] 66.62 1.49
Desulfobulbaceae bacterium UBA4056 [family] 71.93 1.75
Desulfobulbaceae bacterium UBA5121 [family] 95.45 0.6
Desulfobulbaceae bacterium UBA5123 [family] 75.45 1.66
Desulfobulbaceae bacterium UBA5173 [family] 70.97 1.79
Desulfobulbaceae bacterium UBA5611 [family] 70.18 0.88
Desulfobulbaceae bacterium UBA5628 [family] 53.42 0
Desulfobulbaceae bacterium UBA5750 [family] 75.44 1.75
Desulfobulbaceae bacterium UBA6913 [family] 94.95 1.79
Desulfomicrobium sp. UBA5193 [genus] 98.15 0
Desulfonatronaceae bacterium UBA663 [family] 90.26 0.89
Desulfovibrio magneticus [species] 93.15 3.04
Desulfovibrio sp. UBA1335 [genus] 73.67 0.89
Desulfovibrio sp. UBA2564 [genus] 84.95 0.26
Desulfovibrio sp. UBA4079 [genus] 90.88 0.59
Desulfovibrio sp. UBA6079 [genus] 100 0
Desulfovibrio sp. UBA728 [genus] 77.15 0.3
Desulfovibrio sp. UBA7315 [genus] 54.44 0.3
Desulfovibrionaceae bacterium UBA1377 [family] 98.08 0.59
Desulfovibrionaceae bacterium UBA3823 [family] 82.63 1.18
Desulfovibrionaceae bacterium UBA3867 [family] 76.06 0.26
Desulfovibrionaceae bacterium UBA5546 [family] 100 0
Desulfovibrionaceae bacterium UBA5821 [family] 85.94 2.96
Desulfovibrionaceae bacterium UBA5842 [family] 96.91 0
Desulfovibrionaceae bacterium UBA6814 [family] 83.36 3.25
Desulfovibrionaceae bacterium UBA930 [family] 81.38 2.66
Firmicutes bacterium UBA3569 [phylum] 76.89 0.77
Firmicutes bacterium UBA3572 [phylum] 98.41 0
Firmicutes bacterium UBA3576 [phylum] 79.24 0
Firmicutes bacterium UBA911 [phylum] 81.21 0.64
Moorella sp. UBA4076 [genus] 75.09 1.57
Nitrospiraceae bacterium UBA2194 [family] 95.45 2.73
Syntrophaceae bacterium UBA2280 [family] 96.77 5.48
Syntrophaceae bacterium UBA4059 [family] 92.58 2.58
Syntrophaceae bacterium UBA5619 [family] 80.27 4.03
Syntrophaceae bacterium UBA5828 [family] 84.48 3.23
Syntrophaceae bacterium UBA6112 [family] 83.53 3.23
Thermodesulfobacteriaceae bacterium UBA6232 [family] 90.74 2.16

Cytoplasmic Nar nitrate reductase (cNarG) and periplasmic nitrate reductase (NapA)

Denitrification and dissimilatory nitrate reduction to ammonia are important processes in agriculture and just about any other system with active nitrogen redox cycling, and Nar and Nap are the primary nitrate reductases involved. A very quick glance shows that many genomes in the UBA dataset have cnarG (776 genomes; E-value < 1e-200) or napA (361 genomes; E-value < 1e-200). This is a case where there are too many proteins to create a phylogenetic tree easily, so here’s a barplot:

[Barplot: nitrate-reducing genera in the UBA dataset]
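The per-genus counts behind a barplot like the one above can be tallied with sort and uniq. A sketch on a mock list of organism names (GenusA/GenusB are placeholders, not real hits):

```shell
# Mock list: one NCBI organism name per hit genome
cat > demo-organisms.txt <<'EOF'
GenusA bacterium UBA0001
GenusA bacterium UBA0002
GenusB sp. UBA0003
EOF

# Tally hits by the first word of the organism name, most abundant first
awk '{print $1}' demo-organisms.txt | sort | uniq -c | sort -rn
```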

Polyketide synthases

Polyketide synthases are important secondary metabolite biosynthesis proteins and tend to make the news when new ones are discovered. I very lazily copied the seed for pfam 14765, the polyketide synthase dehydratases, chose an arbitrary E-value threshold (< 1e-50) without verifying whether it was too low or too high, and aligned full-length PKS “hits” with the partial length pfam 14765 seed sequences, so this is not a correct analysis.  But there are clearly polyketide synthases in the UBA dataset. Have fun!

[Figure: phylogenetic tree of polyketide synthase hits]

Addenda: Limitations and Alternatives

EDIT: A proposal for the ethical use of this dataset

Upon reflection, I felt the need to add this section. Many researchers invested significant resources into sequencing metagenomes. I believe that while the UBA dataset has done a public service by assembling so many metagenomic reads that would never otherwise be assembled and binned, many metagenomic assemblies must have been published before researchers were ready to release them. This tutorial could accelerate the “scooping.” In your use of this dataset, please avoid using information from metagenomes that are unpublished and whose stated purpose is to study similar data. For example, if you are studying nitrogenase genes and find a dataset rich with interesting nitrogenase genes, I argue that it is unethical to include those genes in your analysis if the metagenome’s stated purpose is to study nitrogenases – unless the researchers have already published the work. Maybe you disagree. But would you want someone to steal your dissertation research? Obviously not.

How to find this information (using the Cld example):

  1. Find the SRA accession in the field “SRA Bin ID” (e.g. SRX332015);
  2. Navigate to the webpage of the SRA found at “https://www.ncbi.nlm.nih.gov/sra/SRA#########[accn]” (e.g. SRX332015[accn])
  3. Select “BioProject” from the “Related Information” panel on the right (e.g. PRJNA214105);
  4. Inspect the description of the BioProject (e.g. no information makes me concerned about using Cld sequences or a single genome from this sample).
  5. Record which BioProjects you can use and the SRAs belonging to them.
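Step 2 is easy to script once you have a list of accessions. A sketch (the file name demo-sra-ids.txt is mine; SRX332015 is the example accession from the text):

```shell
# List of SRA experiment accessions, one per line
printf 'SRX332015\n' > demo-sra-ids.txt

# Build the NCBI SRA URL for each accession (step 2 above)
while read ACCN; do
  echo "https://www.ncbi.nlm.nih.gov/sra/${ACCN}[accn]"
done < demo-sra-ids.txt
```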

Script

I have created a bash script, available on github, to run all of the above code after the download and preparation of the UBA bacterial dataset. Edit the path to your HMMs and reference proteins in the script, then run using the following command:

 sh uba-hmmer-pipeline.sh <protein> <E-value threshold>

Low memory

If you do not have a computer cluster or hard drive with enough space to handle the 267 Gb total created by this workflow, you should consider downloading and processing the dataset in parts.


Try this nested for-loop, which should create no more than ~65 Gb at any one time and result in a ~7 Gb FASTA file of proteins. I did not test this, and you will probably need to change the directory “bacteria” to the directory created by unpacking the download in parts. You may find it easier to download one at a time manually.

# For parts 00 through 05
for NUM in 00 01 02 03 04 05;
do

# Download annotations for all UBA bacterial genomes
wget https://data.ace.uq.edu.au/public/misc_downloads/uba_genomes/uba_bac_prokka.part-$NUM ;

# Unpack the tarball and promptly remove it to save space
cat uba_bac_prokka.part-$NUM | tar xz
rm uba_bac_prokka.part-$NUM

# Collect proteins
for GENOME in `ls bacteria/`; 
do 
sed "s|^>[^_]*_|>${GENOME}_|" bacteria/${GENOME}/${GENOME}.faa >> uba-bac-proteins.faa; 
done

rm -r bacteria/

done

Searching for nucleotides or other information in genomes

The above tutorial covers searching for proteins, but you can easily search this dataset for genes as nucleotide sequences, for example, using BLASTN directed at *.ffn files. The available genome annotations are standard output from prokka:

  • .gff This is the master annotation in GFF3 format, containing both sequences and annotations. It can be viewed directly in Artemis or IGV.
  • .gbk This is a standard Genbank file derived from the master .gff. If the input to prokka was a multi-FASTA, then this will be a multi-Genbank, with one record for each sequence.
  • .fna Nucleotide FASTA file of the input contig sequences.
  • .faa Protein FASTA file of the translated CDS sequences.
  • .ffn Nucleotide FASTA file of all the prediction transcripts (CDS, rRNA, tRNA, tmRNA, misc_RNA)
  • .sqn An ASN1 format “Sequin” file for submission to Genbank. It needs to be edited to set the correct taxonomy, authors, related publication etc.
  • .fsa Nucleotide FASTA file of the input contig sequences, used by “tbl2asn” to create the .sqn file. It is mostly the same as the .fna file, but with extra Sequin tags in the sequence description lines.
  • .tbl Feature Table file, used by “tbl2asn” to create the .sqn file.
  • .err Unacceptable annotations – the NCBI discrepancy report.
  • .log Contains all the output that Prokka produced during its run. This is a record of what settings you used, even if the --quiet option was enabled.
  • .txt Statistics relating to the annotated features found.
  • .tsv Tab-separated file of all features: locus_tag,ftype,gene,EC_number,product
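For example, the nucleotide predictions can be pooled the same way the proteins were, then indexed with makeblastdb and searched with blastn. A coreutils-only sketch on a one-genome mock directory (the directory, locus tag, and sequence are invented for the demo):

```shell
# Mock prokka output for one genome
mkdir -p demo-bacteria/UBA0001
cat > demo-bacteria/UBA0001/UBA0001.ffn <<'EOF'
>LOCUSTAG_00001 hypothetical protein
ATGAAATTTGGG
EOF

# Concatenate .ffn files, tagging each header with its genome ID
for GENOME in `ls demo-bacteria/`; do
  sed "s|^>[^_]*_|>${GENOME}_|" demo-bacteria/${GENOME}/${GENOME}.ffn >> demo-genes.ffn
done

# Headers now name the source genome, ready for a BLAST database
grep '^>' demo-genes.ffn
```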

Incomplete genomes

Most of the metagenome-assembled genomes from the UBA dataset are less than 90% complete, meaning they are less complete than most genomes from isolates. As noted above, this can lead to the absence of important genes that would inform your analysis. Also, here’s how the assembly of UBA genomes is described in the methods:

Each of the 1,550 metagenomes were processed independently, with all SRA Runs within an SRA Experiment (that is, sequences from a single biological sample) being co-assembled using the CLC de novo assembler v.4.4.1 (CLCBio)

That is, single samples were used for assembly and binning. It’s known that assembling reads from multiple samples can improve the assembly of rare genomes but deteriorate the assembly of common genomes. It’s also known that differential coverage information – reads mapped from multiple samples with varying strain abundances – improves binning. In summary, it’s likely that many bins could be improved through more careful binning of a few samples. You can try to assemble and bin these genomes yourself using Anvi’o.

Genomes without protein annotations

You can obtain structural annotations for any genome sequence quickly using prodigal, which is the backbone of prokka.

prodigal -i genome.fna -a proteins.faa -d genes.ffn

Bulk download of JGI IMG proteins

IMG is like a great neighborhood bakery that only lets you buy one croissant at a time. I’m referring to the inability to search through >100 (meta)genomes at once, which I’m guessing was a conscious trade-off between better service for regular users and worse service for jerks trying to download the entire tree of life. (“Hurry up with my damn croissants!”). For our purposes, a clever workaround would be searching and downloading proteins annotated with a particular pfam, but many proteins belonging to a pfam lack the appropriate annotation. Not good if you want to identify new proteins.

In theory, you can use JGI’s instructions to download sequences (genomes, proteins, etc.) in bulk then use the above workflow to search through those proteins. Note that according to a 2013 forum post, only genomes sequenced at JGI are available for download. All other genomes are available through NCBI.

Citations

UBA genomes:

  • Parks DH, Rinke C, Chuvochina M, Chaumeil P-A, Woodcroft BJ, Evans PN, et al. (2017). Recovery of nearly 8,000 metagenome-assembled genomes substantially expands the tree of life. Nat Microbiol 2: 1533–1542.

Other genome-resolved metagenomics papers demonstrating some useful practices. Disclosure: I work closely with the Banfield lab, and one of these papers is my own.

  • Anantharaman K, Hausmann B, Jungbluth SP, Kantor RS, Lavy A, Warren LA, et al. (2018). Expanded diversity of microbial groups that shape the dissimilatory sulfur cycle. ISME J. e-pub ahead of print, doi: 10.1038/s41396-018-0078-0.
  • Anantharaman K, Brown CT, Hug LA, Sharon I, Castelle CJ, Probst AJ, et al. (2016). Thousands of microbial genomes shed light on interconnected biogeochemical processes in an aquifer system. Nat Commun 7: 13219.
  • Barnum TP, Figueroa IA, Carlström CI, Lucas LN, Engelbrektson AL, Coates JD. (2018). Genome-resolved metagenomics identifies genetic mobility, metabolic interactions, and unexpected diversity in perchlorate-reducing communities. ISME J 12: 1568–1581.
  • Crits-Christoph A, Diamond S, Butterfield CN, Thomas BC, Banfield JF. (2018). Novel soil bacteria possess diverse genes for secondary metabolite biosynthesis. Nature 558: 440–444.

How to Use Assembly Graphs with Metagenomic Datasets

Note: This is a tutorial to help other microbiologists learn a research method from a recent publication (EDIT: open-access PDF). This method can be applied to de novo genome and metagenome assemblies from short read sequencing data (e.g. Illumina) but is not relevant for assemblies using long read sequencing data (e.g. PacBio). 

Code In Brief

As the article below details, I recommend that all genome-resolved metagenomic studies use Bandage to inspect assembly graphs, at a minimum, to check for (1) abundant genomes that have been critically mis-assembled and (2) genes crucial to your study that may not have been binned.

To create a subgraph for #1 and render an image:

Bandage reduce k99.fastg graph.gfa --scope depthrange --mindepth 500.0 --maxdepth 1000000 --distance 3
Bandage image graph.gfa graph.png --nodewidth 50 --depwidth 1

To create a subgraph for #2:

Bandage reduce k99.fastg reduced.gfa --query keygene.fasta --scope aroundblast --evfilter 1e-20 --distance 5

The path to Bandage commands on a Mac is: /Applications/Bandage.app/Contents/MacOS/Bandage

Outline

  1. Introduction
  2. Preparing the Assembly Graph
  3. Assessing and Improving Genome Binning
  4. Recovering Specific Sequences
  5. Future Directions and Lessons
  6. Addendum: MetaCherchant

I. Introduction

The algorithms used to assemble genomes and metagenomes from short sequencing reads have a fundamental problem. When a genome contains two or more identical sequences, the algorithm will combine them into one consensus sequence instead of producing unique contigs for every sequence. To show why this is an issue, consider what would happen if an algorithm, set to use 5-letter substrings, assembled the following sentence:

“Everything is everywhere but the environment selects.” (depth=1x)

Because the substring “every” appears twice, the assembly algorithm combines the two occurrences into one consensus sequence. Consequently, the algorithm cannot determine the correct order of the sequences. The result is a set of fragmented sequences, with the repetitive sequence as the point of fracture:

“Every” (2x)

“thing is” (1x)

“where but the environment selects.” (1x)

Clearly, much of the meaning is lost. (Though some would argue the meaning of the sentence has been improved).
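
This collapse is easy to reproduce at the command line. The sketch below is an illustration only (not part of any assembler): it counts every 5-letter substring of the sentence, and the one substring that occurs twice, “every,” is exactly the repeat that prevents the assembler from choosing a path:

```shell
# Count 5-letter substrings ("k-mers") of the sentence with bash and coreutils;
# the top line of output is the substring "every" with a count of 2 --
# every other substring occurs exactly once
s="everythingiseverywherebuttheenvironmentselects"
for ((i=0; i<=${#s}-5; i++)); do echo "${s:i:5}"; done \
  | sort | uniq -c | sort -rn | head -1
```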

Although genome assemblies involve nucleotides and longer substrings than the example above, they suffer from the same flaw: a repetitive sequence will fragment one long sequence into several smaller sequences. Metagenome assemblies are extra complicated because related microbes can have mostly, but not entirely, identical genomes, creating mis-assemblies caused by what I will refer to as “strain variation.” The combination of repetitive sequences, caused by duplication and horizontal transfer, and strain variation, caused by conserved sequences amid varying sequences, presents two major issues for metagenome assembly. First, the mis-assembly of specific sequences, which limits recovery of important genes, like the highly conserved gene for 16S ribosomal RNA. Second, the fragmentation of genomes into many smaller contigs, which limits genome recovery using assembly and binning. These issues are related – what is genome fragmentation but the mis-assembly of many sequences? – but pose different problems for different analyses.

Assembly graphs, also known as contig graphs and de Bruijn graphs, can help remedy the problems posed by mis-assembly. Analyzing assembly graphs with a tool such as Bandage (a Bioinformatics Application for Navigating De novo Assembly Graphs Easily) [1] provides crucial context for correcting mis-assemblies in a simpler format than read-mapping, a complementary tool. As shown in the below video, with assembly graphs you can quickly intuit the quality of the assembled genome and methodically diagnose mis-assemblies, such as multiple copies of the 16S rRNA gene interfering with one another’s assembly.

In a recent publication [2], I used assembly graphs of a metagenome assembly to fix mis-assembled sequences and assign them to “bins,” the collections of contigs constituting a metagenome-assembled genome. Because these mis-assembled sequences were the primary subject of the research, the publication of the study absolutely depended on assembly graphs for the recovery of the sequences. Your needs may not be as dire, but assembly graphs could still prove useful. Below, I use the published data to describe how to recover specific sequences and assess genome binning quality, which was not detailed in the publication but is likely to be helpful in almost any metagenome study.

First, a little background on the research. I study a metabolism called perchlorate reduction, in which bacteria use a modified electron transport chain to respire perchlorate (ClO4-). (How and why some bacteria get their energy from perchlorate, which rains down from the atmosphere in only the tiniest amounts, is fascinating, understudied, and not at all relevant here). Only isolated perchlorate-reducing bacteria have been studied, so not much was known about the diversity of uncultivated perchlorate reducers and how they might interact with other organisms. By performing metagenome sequencing on communities fed perchlorate, I hoped to find out which microbes reduced perchlorate and which did something else. The only problem? The genes for perchlorate reduction, perchlorate reductase (pcr) and chlorite dismutase (cld), are horizontally transferred, so different organisms possessed highly similar sequences of the same genes. As you can now predict, most copies of pcr and cld were divided between short contigs and impossible to accurately bin – a far too common problem in metagenomic datasets.

II. Preparing the Assembly Graph

Limitations and Strategies

Before you attempt to use assembly graphs in metagenomics, please note that it is currently very hard to work with large metagenomic datasets. Ryan Wick, the developer of Bandage, provides some helpful tips for dealing with large metagenomic datasets that you should read fully. I have been able to apply this technique to a metagenome of about 80 recovered genomes and 1,000 Mbp total sequence (2.6 Gb file, unpublished results), but it was exceptionally slow and frustrating. I predict that for any metagenome with >40 recovered genomes, reducing the dataset’s complexity will greatly improve the speed and utility of your assembly graph analysis.

One strategy to reduce the complexity of the metagenomic dataset is to assemble each sample independently. Most metagenome assemblies combine all reads to increase coverage, but assembling samples individually provides its own advantages. Specifically, it both reduces the total size of each assembly (increasing Bandage’s speed) and, importantly, decreases the likelihood that a closely related genome/sequence is present (lowering the chance of mis-assembly). The cost is in coverage (fewer genomes recovered) and, in some cases, time and memory required to analyze samples one-by-one. A compromise would be to assemble similar samples together.

A complementary strategy is to subsample reads prior to assembly. Essentially, selecting a subset of reads filters out low-abundance genomes that create mis-assemblies from strain variation. The downside is that perfectly fine, less-abundant genomes are also removed. seqtk has a convenient subsampling tool.

# Subsample 1% of reads to a new file
# Use the same seed (-s) for both files to keep read pairs in sync
SUB=0.01
SEED=100
seqtk sample -s $SEED f.fastq $SUB > ${SUB}_f.fastq
seqtk sample -s $SEED r.fastq $SUB > ${SUB}_r.fastq

Note: if you do use either of these strategies to assist with genome binning, you can use dRep to de-replicate the genomes recovered from multiple assemblies. (dRep was developed by my colleague, Matthew Olm).

A final simple strategy to reduce the complexity of the graph is to run the Bandage “reduce” command from a command line (not available within the GUI). This command subsamples the assembly graph according to specific parameters, including contigs/nodes within a certain range of read depth (caution: within-genome variation can be rather large) or within a certain distance from a BLAST hit or node of interest (caution: at high distances, e.g. >20, this can capture multiple genomes). Using a reduced graph will quickly and painlessly improve Bandage’s performance but will not correct any inherent issues in the assembly arising, for example, from strain variation.

Try a total assembly yourself, then see if you need to take extra measures to reduce the dataset’s complexity.

Creating the Assembly Graph

Bandage supports assembly graphs produced by several different assemblers (see: supported formats). I used MEGAHIT (FASTG format), which requires an additional command after assembly to generate the assembly graph. For example:

# Trim reads prior with Sickle
sickle pe -f f.fastq -r r.fastq -t sanger -o f_trimmed.fastq -p r_trimmed.fastq -s singles_trimmed.fastq -q 20

# Assembly
megahit -1 f_trimmed.fastq -2 r_trimmed.fastq -o output
# MEGAHIT also has presets for different types of metagenomic datasets
# After assembly
cd output/intermediate_contigs
megahit_toolkit contig2fastg 99 k99.contigs.fa > k99.fastg
open k99.fastg

Bandage developers have supplied a sample genome assembly to inspect. If you would like to try out using an assembly graph on a metagenomic dataset, you can download, trim, and assemble reads from a simple, 4-genome community observed in my study.

Scaffolding vs. Assembly Graphs

Scaffolding is a method to extend and combine contigs that produces more-complete metagenome-assembled genomes. Assembly graphs, however, are composed of contigs, not scaffolds. If you would like to use assembly graphs from a metagenomic dataset that has been scaffolded, you must parse your scaffolding output to find the contigs within each scaffold within each bin. How to find and parse this information will vary by scaffolding method. (For the scaffolding algorithm SSPACE, the relevant information is in the files “*.final.evidence” and “*.formattedcontigs_min0.fasta”). If you’re bold, you may skip scaffolding, which has worked for me on very enriched communities with fewer genomes and less strain variation.
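
As a concrete, hypothetical sketch of that parsing step, the following pulls every contig ID out of an SSPACE-style evidence file. It assumes IDs of the form `tig00001`; inspect your own “*.final.evidence” file and adjust the pattern to match:

```shell
# List the unique contig IDs referenced by a bin's evidence file
# (the "tig[0-9]*" pattern is an assumption -- check your scaffolder's output)
grep -o 'tig[0-9][0-9]*' bin001.final.evidence | sort -u > bin001.contigs.txt
```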

III. Assessing and Improving Genome Binning

Visualizing assembly graphs allows you to quickly and qualitatively understand your assembly in ways that simple metrics such as N50 and total size cannot.
graph
An assembly graph of the combined assembly of five samples, totaling 612 Mbp, 746,102 nodes, and 146,832 edges.
Aside from a quality check on assembly, it can also assist in binning metagenome-assembled genomes. How? More-abundant genomes tend to have contigs linked to one another spatially in the graph. Because you’re a human and remarkable at recognizing patterns, you can use assembly graphs to:
  • Include sequences too short for binning algorithms (<2000-2500 bp).
  • Include sequences of too aberrant coverage for binning algorithms (e.g. rRNA genes)
  • Identify misbinned sequences within a contiguous assembly.
  • Identify strain variation in abundant genomes, which can inform the subsampling of samples or reads.
  • In rare cases, create complete, finished genomes.

I’ll explain each of these uses below. First, some basics. You can highlight a specific bin by selecting its contigs/nodes:

# Create list
BIN=bin001
grep '>' ${BIN}.fasta > ${BIN}.headers.txt # Select contigs
sed 's/>k99_//' ${BIN}.headers.txt > ${BIN}.nodes.txt # Remove prefix
sed -i 's/$/,/' ${BIN}.nodes.txt # Add comma to end of line
# Copy and paste list into Bandage to color/label

More comprehensively, you can color and label many bins at once using a CSV file. Here are the specifications provided to me by Ryan Wick:

The feature currently works as follows: if a column in your loaded CSV file has a header of “color” or “colour” (case insensitive), then the values in that column are treated as colours.  The values can specify the exact colour, either as 6-digit hex (e.g. #FFB6C1), an 8-digit hex with transparency (e.g. #7FD2B48C) or a standard colour name (e.g. skyblue).  If the value does not fit any of those, then it is considered a category and assigned a colour automatically.  All nodes with the same category will obviously get the same colour.  These colours are assigned as the nodes’ custom colours, not as a new colour category.

For example:

Name,color,bin,label
17986,2F6D80,Bin001,The10000thClostridia
55081,67CAD5,Bin002,CPR-HostUnknown:(
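
If each bin sits in its own FASTA file, a CSV like the above can be generated in a few lines of shell. This sketch assumes MEGAHIT-style headers of the form `>k99_<node> ...` (as in the grep/sed commands earlier) and uses the bin name as a color category, so Bandage assigns each bin a color automatically:

```shell
# Build a Bandage CSV from a directory of bin FASTAs (assumed layout: bins/*.fasta)
echo "Name,color" > bins.csv
for f in bins/*.fasta; do
  bin=$(basename "$f" .fasta)
  # strip the ">k99_" prefix, drop everything after the node name, append bin name
  grep '^>' "$f" | sed -e 's/^>k99_//' -e 's/ .*//' -e "s/\$/,$bin/" >> bins.csv
done
```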

For Anvi’o users, I have written a (novice) Python script that generates the CSV file for labeling bins. (EDIT: Antti Karkman at the University of Gothenburg has provided an edited script for contigs that have been renamed and filtered using ‘anvi-script-reformat-fasta’). The script cycles through a limited set of 12 colorblind-safe colors, since people can only discriminate between 6-8 colors at once anyway. (I suggest that you focus on closely related organisms, since those are more likely to be spatially adjacent in assembly graphs). The result is enlightening:

graph-100
The combined assembly used in the publication [2]. Each of the 48 bins is one of 12 colors; every color is used 4 times. Unbinned sequences are in gray; these bins were below 70% complete (as assessed by CheckM) and not included in the publication.
graph-color-all
A different assembly from a well-sampled community that has undergone successive enrichments (unpublished data) and, accordingly, suffers less from strain variation. About 25 metagenome-assembled genomes were recovered. Only two suffer seriously from strain variation. Notably, this assembly did not require scaffolding. A MEGAHIT preset of meta-sensitive and final kmer size of 141 were used.

You can immediately notice which bins are near-complete with long, uninterrupted contigs and which bins have related genomes that interfered with assembly. If you did a good job binning, this image will be rather pleasing. It’s also a helpful educational display.

Even after rigorous binning, you can notice binning errors in the published dataset*; some sequences are indicated as unbinned (gray), probably because I excluded low quality bins from the analysis, and some bins have minor errors. Most researchers accept that any metagenome-assembled genome may have some contamination and that you cannot fix every error (one memorable quote: “you will get garbage from metagenomic assemblies”), but for those who seek to be as rigorous as possible, as when identifying a new metabolism or a new phylum, this method will support your case. Below are some examples of common uses for assembly graphs in binning.

Include sequences too short for binning algorithms (<2000-2500 bp) & sequences of too aberrant coverage for binning algorithms (e.g. rRNA genes)

These short and/or aberrantly abundant sequences could have been assigned to the surrounding bin.

graph-depth.png

To include all nodes that are contiguous with a genome bin, you can create a reduced graph extended from the binned contigs. Be aware that some bins can be connected to dozens of others, so you should be selective.

# Remove new line characters from list of nodes
tr -d '\n' < bin001.nodes.txt > bin001.inline.nodes.txt

# Select all contiguous nodes to bin001 
Bandage reduce k99.fastg bin001.gfa --scope aroundnodes --distance 50 --nodes `cat bin001.inline.nodes.txt`
# Load the Bandage GUI, load the CSV, and reinspect the bins

Identify misbinned sequences within a contiguous assembly.

The contigs in the red bin clearly belong in the purple bin. These errors become more common when strain variation is present, as indicated by the many breaks between contigs.

graph-mis

Identify strain variation in abundant genomes, which can inform the subsampling of samples or reads.

Genome BM101 was not happy to be assembled, likely due to the presence of multiple strains in the enrichment.

graph-strain

An important aside: While CheckM determined this bin to be 98% complete with 3% contamination, as judged by conserved single copy genes, we can clearly see that many accessory genes will be interrupted in or absent from the published metagenome-assembled genome. The true completeness of the genome was overestimated because the conserved sequences of the core genome were better assembled than variable genes. Additionally, binning tends to remove genes that have been adaptively gained or lost between samples. Together, this means that the estimated completeness of a metagenome-assembled genome tends to be higher than the fraction of the true genome actually recovered.

Unfortunately, because strain variation is essentially the recreation of slightly diverged sequences, simply using Bandage to select all the contigs/nodes of a genome affected by strain variation can produce a genome bin with many more nucleotides and genes than the true genome. In this case, the completeness estimates may underestimate the genome’s duplicity. If strain 1 and strain 2 have very different coverage, a subsampling approach as discussed above may help disentangle one from the other. A fancier-though-not-always-better approach is to (1) select all of the contigs, (2) select all reads that mapped to those contigs from your BAM file, (3) subsample that subset of reads, and (4) assemble each subsampled subset. Decide before you start how desperate you are for a clean genome.
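
Steps (1)-(2) can be sketched with a one-liner over the alignment. This is a hypothetical example with made-up file names: it assumes the alignment is available as SAM text (pipe a BAM through `samtools view` first) and, for simplicity, it flattens read pairing (for real data, `samtools fastq` keeps pairs intact). The resulting reads can then be subsampled with seqtk as above:

```shell
# From a list of affected contig names (one per line), extract every
# mapped read aligned to those contigs and emit it as FASTQ
awk -F'\t' 'NR==FNR {keep[$1]; next}
            !/^@/ && ($3 in keep) {print "@"$1"\n"$10"\n+\n"$11}' \
    strain.contigs.txt mapping.sam > strain_reads.fastq
```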

Strain variation can also be much more benign, as in the case of these genomes from the genus Denitrovibrio, which are much better assembled and binned.

graph-strain1

In my experience, strain variation is more likely to affect more-abundant genomes. There may be a biological reason for this, but it could simply be that less-abundant genomes do not have sufficient coverage of the minority strain to create an issue. For example, if two strains are in a ratio of 10:1 and a genome requires 4x coverage to assemble, assembly problems that are significant at 1000x coverage (1000x:100x) would be insignificant at 10x coverage (10x:1x), where the minority strain would not assemble by itself (1x < 4x).

In rare cases, create complete, finished genomes.

Manual assembly was used to reduce this small CPR bacterial genome from 81 nodes to 8 nodes, which could then be scaffolded with manual read mapping.

graph-BM705

The subsampling strategies discussed above are particularly useful for recovering complete genomes.

The Importance of Assessing and Improving Genome Binning

After completing a round of automated binning, you can compare your bins to an assembly graph to diagnose problems such as strain variation and inaccurate binning. For example, in one metagenome assembly, strain variation was so bad that the most abundant genome was undetected in binning because it was entirely fragmented into ~1000-bp contigs. After I recovered the genome by assembling samples individually, the strain was found to have a key functional role! Highlighting bins in an assembly graph is one way to catch problems like this.

 

graph-all-bins.png
This metagenome assembly with about 80 genomes has abundant sequences that went unbinned (gray).
graph-all-bins
The gray contigs, each of which has high read coverage but is ~1000-bp or less, should clearly have been assembled as one long contig. Instead, these sequences were filtered from the binning process.

Let this be a cautionary tale that you should at a minimum check a reduced assembly graph of the most abundant contigs for short, unbinned sequences.

# Select nodes >500x coverage and three connected nodes to maintain links
Bandage reduce k99.fastg graph.gfa --scope depthrange --mindepth 500.0 --maxdepth 1000000 --distance 3

Assembly graphs can improve most genome bins by capturing contigs too short or too aberrant in coverage. While this can consume as much time as you let it, you may be able to be more efficient and reproducible by automating processes. Alternatively, you can consider this a curation step for a genome(s) of special interest.

Note: For my publication, I relied exclusively on standard approaches for binning (i.e. Anvi’o), as most groups do, but the errors above indicate that bin recovery could have been improved by inspecting the assembly graph. Fortunately, none of the errors change the conclusions of the paper, but I still plan to upload revised genomes to NCBI.

IV. Recovering Specific Sequences

Assembly graphs have a second important use: using qualitative information to assemble sequences that an algorithm cannot. Assembly graphs enabled me to manually assemble highly conserved sequences (16S rRNA genes) and horizontally transferred sequences (pcr and cld) that were crucial to my research. The goals are to merge short contigs belonging to a fragmented sequence and to link them to longer contigs that belong to genome bins. From the paper:

“Bandage . . . . was used to visualize contigs, identify BLAST hits to specific gene sequences, manually merge contigs, and relate merged contigs to a genome bin. Criteria for merging contigs were that contigs (1) shared a similar abundance and (2) lay in the same contiguous path. Merged contigs were binned when contiguity with multiple contigs from the same genome bin was certain. This strategy was also used to recover some 16S rRNA genes.”

This is particularly useful for unbinned contigs that, as you could notice in the above section, can be clearly linked to a specific genome.

Identifying and Visualizing Genes of Interest with BLAST

The first step is to identify the contigs/nodes that contain your gene of interest. You can either perform your own homology search or use the built-in BLAST annotation feature in Bandage (detailed instructions). While you may want to visualize all hits at first to understand the distribution among genomes, it is much simpler to reduce the Scope to “Around BLAST hits” or “Around [selected] nodes” and only show a short Distance of nodes. The key benefit? In a FASTA file all of these contigs appear as separate sequences, yet in an assembly graph their spatial relationship is maintained.

graph-0
Contigs/nodes adjacent to contigs with cld (blue) and pcr (green). Length in bp is indicated. These sequences were obtained from an individually assembled sample.

Manually Merge Contigs

In Preferences, adjust Depth and Node Width to accentuate coverage differences between nodes. This helps distinguish sequences from different organisms. To merge nodes, use Output > “Specific exact path for copy/save” to input selected node sequences. (Tip: Have a list of nodes, enter one at a time, and toggle +/- until Bandage confirms the path is possible). I only merged nodes if they shared the same path (“contiguous”) and had logically similar abundance. In the below example, the 1000x contig, while at much higher abundance, could be a duplicated sequence flanking both the selected insertion and the sequences at the bottom left.

gif-1
Manually merge contiguous contigs/nodes with equal coverage.

Remember that the sequence you recover is likely a consensus sequence. Usually, an assembler will use the higher coverage nucleotides to construct the contig, but in similarly abundant reads the result could be a mixture of nucleotide identities from two or more populations. If it matters for your study, you can perform an SNV analysis.

Relate Merged Contigs to a Genome Bin

Select a sequence from adjacent contigs and search for it among your bins. Often, as below with genomes “BM301” and “BM503,” the genome bin that the sequence of interest belongs to can be blindingly obvious: all contigs/nodes are contiguous and of similar abundance. (Note: reads can be recruited from similar sequences or unassembled sequences in your metagenome, distorting read depth).

graph-3
Nodes 49954, 23972, 34899, and 15406 were binned in genome BM301. Nodes 49174 and 39643 were binned with genome BM503. Based on coverage and contiguity, nodes were either added to bin BM301 (orange), added to bin BM503 (blue), or added to both bins (gray). Contig 44798 is a transposase that flanks the transposon interior (right) and, accordingly, is found at twice the coverage.

From the paper:

FIG_4
A portion of the assembly graph (left) and the resulting annotation (right) for each selection, bin BM301 and bin BM503.

Other times, you will be less certain of the contiguity. It is best to be conservative about merging and binning sequences. When necessary, you can use read mapping or even PCR to link some contigs to one another with more certainty.

Adding Sequences to Bins

Your manual assembly might include sequences that are already present in genome bins; these duplicates should be removed from the genome bin before analysis. Duplicate contigs can be identified by the name of the contigs/nodes or, in the case of scaffolded sequences, by using nucmer:

nucmer --coords -p bin001 bin001.fasta bandage_seqs.fasta
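
To act on the alignment, the resulting `.coords` table can be filtered for near-identical hits. A sketch, assuming the “|”-separated column layout of a MUMmer coords file, with percent identity in the fourth group and the reference and query names in the last (file names are illustrative):

```shell
# Sequences from the manual assembly that already align to bin001 at
# >99% identity -- candidates for removal as duplicates
awk -F'|' 'NF==5 && $4+0 > 99.0 {print $NF}' bin001.coords \
  | awk '{print $2}' | sort -u > duplicate_seqs.txt
```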

The Importance of Recovering Specific Sequences

This method enables you to recover and bin a consensus sequence from fragmented contigs. Crucially, with this manual assembly you can determine if more than one genome contains the same sequence. I understand any aversion to using this technique for that reason, but (1) ignoring something does not mean it is not there and (2) other techniques generate duplicity anyway, as evidenced by the need for programs like dRep.

Inspecting numerous assembly graphs has convinced me that many important gene sequences fail to be properly binned. At a minimum, I advise researchers to search for connections between binned sequences and important genes that might be on unbinned sequences. You can either simply eyeball the size of the results or more accurately parse the returned contigs and compare that list to binned contigs. The following Bandage command enables such a search:

# Select portion of graph within 5 nodes BLAST hits to the key gene, with E-value filter of 1e-20
Bandage reduce k99.fastg reduced.gfa --query keygene.fasta --scope aroundblast --evfilter 1e-20 --distance 5
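
To do the comparison precisely rather than by eye, you can diff the node names in the reduced graph against your binned contigs. A sketch, assuming GFA segment records of the form `S<tab><name><tab><sequence>` and a file listing all binned node names, one per line (file names are illustrative):

```shell
# Nodes near the key gene (segment records in the reduced graph)
awk -F'\t' '$1=="S" {print $2}' reduced.gfa | sort -u > near_gene.txt
# All node names already assigned to bins
sort -u all_bins.nodes.txt > binned.txt
# Names present only in the first file: near the gene but unbinned
comm -23 near_gene.txt binned.txt
```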

V. Future Directions and Lessons

Assembly graphs may only be useful for as long as de novo assemblies rely on short read sequencing. While in use, there is room for improved or alternative methods that are above my pay grade.
  • Code for updating an Anvi’o database/collection using revised binning selections from Bandage.
  • Automated refinement of binning by extending a bin’s graphs until it reaches another bin’s graph. This would allow binning to incorporate information from the assembler (note: this may not lead to improvements for all bins).
  • Using assembly graph information to assess completeness and contamination.
  • A complementary binning workflow for sequences <2500 bp that may constitute highly abundant genome(s) affected by strain variation. For example, all short contigs greater than 100x coverage are grouped by taxonomy. Contigs of a genome would be compared to BAM files to determine in which sample coverage is sufficiently high and SNVs sufficiently low to enable assembly through subsampling.
  • Disentangling specific consensus sequences by using SNV coverage. For example, if 4% of sites in a sequence are variable because of two unique populations at ~10x and ~2x coverage, create two sequences using the (1) ~10x coverage SNVs and shared sites and (2) ~2x coverage SNVs and shared sites. If applied to entire genomes, this could be a potent way to fix strain variation.

What have I learned from using assembly graphs? Metagenomic datasets are tremendously valuable but also complex and fraught. The contigs output by the assembler often represent the composite of multiple strains as one genome. When the diversity among reads is sufficient, all or part of these composite genomes assemble poorly, destroying or dividing the genome. When gene gain or loss occurs between samples, genome binning tends to remove these accessory genes, which could very well be adaptive traits relevant to the study. Until the inherent issues are resolved by new comprehensive methods, researchers will be faced with making serious subjective decisions in the form of colorful spaghettis.

VI. Addendum: MetaCherchant

Of course, after submitting my paper and writing this blog, I found the publication describing MetaCherchant, a tool that reconstructs the genomic environment around a specific gene of interest. It is elegant, and I wish I had known about it, so I will highlight it here. MetaCherchant “builds a subgraph . . . around a target nucleotide sequence” and allows you to compare two samples to see if the genomic context of the gene has changed. MetaCherchant differs from the approach that I have described mostly in that (1) it builds a subgraph, not a graph of an entire metagenome, and (2) a specific gene of interest is used to build each graph, rather than observing multiple sequences in a single graph. Regardless, both approaches allow you to link a gene to genomes from a metagenome. Without having used the tool, I expect that MetaCherchant’s focused approach would afford some serious improvements in computational resources, time, and assembly quality if your goal is to analyze specific genes. The developers compared a total metagenome assembly from SPAdes to the localized metagenome assembly from MetaCherchant. The figure below, from their paper, shows that while the antibiotic resistance gene cfxA3 (green) was successfully linked by both approaches (black) to the Bacteroides dorei genome, only MetaCherchant (red) linked cfxA3 to the Parabacteroides distasonis genome. That’s an important improvement in detection.

 

btx681f5-e1516390307298.png

You should definitely read their publication, though, ultimately, your goals and study design will inform the approach you choose. I plan to write another blog post comparing the approaches with my dataset.

Citing This Work

The following papers demonstrate Bandage [1] and the method and logic I used for using Bandage to recover specific sequences from metagenomic datasets [2]: