Multiple platform build/check report for BioC 3.21

This page was generated on 2025-03-20 11:40 -0400 (Thu, 20 Mar 2025).

Hostname   OS                               Arch (*)  R version                                                                             Installed pkgs
nebbiolo1  Linux (Ubuntu 24.04.1 LTS)       x86_64    R Under development (unstable) (2025-03-13 r87965) -- "Unsuffered Consequences"       4777
palomino7  Windows Server 2022 Datacenter   x64       R Under development (unstable) (2025-03-01 r87860 ucrt) -- "Unsuffered Consequences"  4545
lconway    macOS 12.7.1 Monterey            x86_64    R Under development (unstable) (2025-03-02 r87868) -- "Unsuffered Consequences"       4576
kjohnson3  macOS 13.7.1 Ventura             arm64     R Under development (unstable) (2025-03-02 r87868) -- "Unsuffered Consequences"       4528
kunpeng2   Linux (openEuler 24.03 LTS)      aarch64   R Under development (unstable) (2025-02-19 r87757) -- "Unsuffered Consequences"       4458

(*) as reported by 'uname -p', except on Windows and Mac OS X

Package 896/2313: goSorensen 1.9.0
Maintainer: Pablo Flores
Snapshot Date: 2025-03-19 13:40 -0400 (Wed, 19 Mar 2025)
git_url: https://git.bioconductor.org/packages/goSorensen
git_branch: devel
git_last_commit: a5e228c
git_last_commit_date: 2025-03-18 19:26:14 -0400 (Tue, 18 Mar 2025)

Hostname   OS / Arch                                INSTALL  BUILD  CHECK  BUILD BIN
nebbiolo1  Linux (Ubuntu 24.04.1 LTS) / x86_64      OK       OK     OK     (UNNEEDED, same version is already published)
palomino7  Windows Server 2022 Datacenter / x64     OK       OK     OK     OK (UNNEEDED, same version is already published)
lconway    macOS 12.7.1 Monterey / x86_64           OK       OK     OK     OK (UNNEEDED, same version is already published)
kjohnson3  macOS 13.7.1 Ventura / arm64             OK       OK     OK     OK (UNNEEDED, same version is already published)
kunpeng2   Linux (openEuler 24.03 LTS) / aarch64    OK       OK     OK


CHECK results for goSorensen on nebbiolo1

To the developers/maintainers of the goSorensen package:
- Allow up to 24 hours (and sometimes 48 hours) for your latest push to git@git.bioconductor.org:packages/goSorensen.git to be reflected in this report. See Troubleshooting Build Report for more information.
- Use the Renviron.bioc settings to reproduce errors and warnings.
- If 'R CMD check' started to fail recently on the Linux builder(s) over a missing dependency, add the missing dependency to 'Suggests:' in your DESCRIPTION file. See Renviron.bioc for more information.



Summary

Package: goSorensen
Version: 1.9.0
Command: /home/biocbuild/bbs-3.21-bioc/R/bin/R CMD check --install=check:goSorensen.install-out.txt --library=/home/biocbuild/bbs-3.21-bioc/R/site-library --timings goSorensen_1.9.0.tar.gz
StartedAt: 2025-03-19 23:12:23 -0400 (Wed, 19 Mar 2025)
EndedAt: 2025-03-19 23:22:54 -0400 (Wed, 19 Mar 2025)
ElapsedTime: 631.1 seconds
RetCode: 0
Status:   OK  
CheckDir: goSorensen.Rcheck
Warnings: 0

Command output

##############################################################################
##############################################################################
###
### Running command:
###
###   /home/biocbuild/bbs-3.21-bioc/R/bin/R CMD check --install=check:goSorensen.install-out.txt --library=/home/biocbuild/bbs-3.21-bioc/R/site-library --timings goSorensen_1.9.0.tar.gz
###
##############################################################################
##############################################################################


* using log directory ‘/home/biocbuild/bbs-3.21-bioc/meat/goSorensen.Rcheck’
* using R Under development (unstable) (2025-03-13 r87965)
* using platform: x86_64-pc-linux-gnu
* R was compiled by
    gcc (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
    GNU Fortran (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
* running under: Ubuntu 24.04.2 LTS
* using session charset: UTF-8
* checking for file ‘goSorensen/DESCRIPTION’ ... OK
* checking extension type ... Package
* this is package ‘goSorensen’ version ‘1.9.0’
* package encoding: UTF-8
* checking package namespace information ... OK
* checking package dependencies ... OK
* checking if this is a source package ... OK
* checking if there is a namespace ... OK
* checking for hidden files and directories ... OK
* checking for portable file names ... OK
* checking for sufficient/correct file permissions ... OK
* checking whether package ‘goSorensen’ can be installed ... OK
* checking installed package size ... OK
* checking package directory ... OK
* checking ‘build’ directory ... OK
* checking DESCRIPTION meta-information ... OK
* checking top-level files ... OK
* checking for left-over files ... OK
* checking index information ... OK
* checking package subdirectories ... OK
* checking code files for non-ASCII characters ... OK
* checking R files for syntax errors ... OK
* checking whether the package can be loaded ... OK
* checking whether the package can be loaded with stated dependencies ... OK
* checking whether the package can be unloaded cleanly ... OK
* checking whether the namespace can be loaded with stated dependencies ... OK
* checking whether the namespace can be unloaded cleanly ... OK
* checking loading without being on the library search path ... OK
* checking dependencies in R code ... OK
* checking S3 generic/method consistency ... OK
* checking replacement functions ... OK
* checking foreign function calls ... OK
* checking R code for possible problems ... OK
* checking Rd files ... OK
* checking Rd metadata ... OK
* checking Rd cross-references ... OK
* checking for missing documentation entries ... OK
* checking for code/documentation mismatches ... OK
* checking Rd \usage sections ... OK
* checking Rd contents ... OK
* checking for unstated dependencies in examples ... OK
* checking contents of ‘data’ directory ... OK
* checking data for non-ASCII characters ... OK
* checking data for ASCII and uncompressed saves ... OK
* checking files in ‘vignettes’ ... OK
* checking examples ... OK
Examples with CPU (user + system) or elapsed time > 5s
                   user system elapsed
buildEnrichTable 56.892  1.485  58.382
enrichedIn       47.274  0.622  47.897
* checking for unstated dependencies in ‘tests’ ... OK
* checking tests ...
  Running ‘test_gosorensen_funcs.R’
 OK
* checking for unstated dependencies in vignettes ... OK
* checking package vignettes ... OK
* checking re-building of vignette outputs ... OK
* checking PDF version of manual ... OK
* DONE

Status: OK


Installation output

goSorensen.Rcheck/00install.out

##############################################################################
##############################################################################
###
### Running command:
###
###   /home/biocbuild/bbs-3.21-bioc/R/bin/R CMD INSTALL goSorensen
###
##############################################################################
##############################################################################


* installing to library ‘/home/biocbuild/bbs-3.21-bioc/R/site-library’
* installing *source* package ‘goSorensen’ ...
** this is package ‘goSorensen’ version ‘1.9.0’
** using staged installation
Warning in person1(given = given[[i]], family = family[[i]], middle = middle[[i]],  :
  Invalid ORCID iD: ‘0000-0002-4736-699’.
** R
** data
** byte-compile and prepare package for lazy loading
** help
*** installing help indices
** building package indices
** installing vignettes
** testing if installed package can be loaded from temporary location
** testing if installed package can be loaded from final location
** testing if installed package keeps a record of temporary installation path
* DONE (goSorensen)

Tests output

goSorensen.Rcheck/tests/test_gosorensen_funcs.Rout


R Under development (unstable) (2025-03-13 r87965) -- "Unsuffered Consequences"
Copyright (C) 2025 The R Foundation for Statistical Computing
Platform: x86_64-pc-linux-gnu

R is free software and comes with ABSOLUTELY NO WARRANTY.
You are welcome to redistribute it under certain conditions.
Type 'license()' or 'licence()' for distribution details.

R is a collaborative project with many contributors.
Type 'contributors()' for more information and
'citation()' on how to cite R or R packages in publications.

Type 'demo()' for some demos, 'help()' for on-line help, or
'help.start()' for an HTML browser interface to help.
Type 'q()' to quit R.

> library(goSorensen)


Attaching package: 'goSorensen'

The following object is masked from 'package:utils':

    upgrade

> 
> # A contingency table of GO terms mutual enrichment
> # between gene lists "atlas" and "sanger":
> data("cont_atlas.sanger_BP4")
> cont_atlas.sanger_BP4
                 Enriched in sanger
Enriched in atlas TRUE FALSE
            TRUE   201   212
            FALSE   29  3465
> ?cont_atlas.sanger_BP4
cont_atlas.sanger_BP4        package:goSorensen        R Documentation

Example of the output produced by the function 'buildEnrichTable'. It
contains the enrichment contingency table for two lists at level 4 of
ontology BP.

Description:

     A contingency 2x2 table with the number of joint enriched GO terms
     (TRUE-TRUE); the number of GO terms enriched only in one list but
     not in the other one (FALSE-TRUE and TRUE-FALSE); and the number
     of GO terms not enriched in either of the two lists.

Usage:

     data(cont_atlas.sanger_BP4)
     
Format:

     An object of class "table"

Details:

     Consider this object only as an illustrative example, which is
     valid exclusively for the lists atlas and sanger from the data
     'allOncoGeneLists' contained in this package. Note that gene
     lists, GO terms, and Bioconductor may change over time. The
     current version of these results was generated with Bioconductor
     version 3.20.


> class(cont_atlas.sanger_BP4)
[1] "table"
> 
> # Sorensen-Dice dissimilarity on this contingency table:
> ?dSorensen
dSorensen              package:goSorensen              R Documentation

Computation of the Sorensen-Dice dissimilarity

Description:

     Computation of the Sorensen-Dice dissimilarity

Usage:

     dSorensen(x, ...)
     
     ## S3 method for class 'table'
     dSorensen(x, check.table = TRUE, ...)
     
     ## S3 method for class 'matrix'
     dSorensen(x, check.table = TRUE, ...)
     
     ## S3 method for class 'numeric'
     dSorensen(x, check.table = TRUE, ...)
     
     ## S3 method for class 'character'
     dSorensen(x, y, check.table = TRUE, ...)
     
     ## S3 method for class 'list'
     dSorensen(x, check.table = TRUE, ...)
     
     ## S3 method for class 'tableList'
     dSorensen(x, check.table = TRUE, ...)
     
Arguments:

       x: either an object of class "table", "matrix" or "numeric"
          representing a 2x2 contingency table, or a "character" vector
          (a set of gene identifiers) or "list" or "tableList" object.
          See the details section for more information.

     ...: extra parameters for function 'buildEnrichTable'.

check.table: Boolean. If TRUE (default), argument 'x' is checked to
          adequately represent a 2x2 contingency table, by means of
          function 'nice2x2Table'.

       y: an object of class "character" representing a vector of valid
          gene identifiers (e.g., ENTREZ).

Details:

     Given a 2x2 arrangement of frequencies (either implemented as a
     "table", a "matrix" or a "numeric" object):

       n_11   n_10
       n_01   n_00,

     this function computes the Sorensen-Dice dissimilarity

       d = (n_10 + n_01) / (2 n_11 + n_10 + n_01).
     
     The subindex '11' corresponds to those GO terms enriched in both
     lists, '01' to terms enriched in the second list but not in the
     first one, '10' to terms enriched in the first list but not in the
     second one, and '00' to those GO terms not enriched in either gene
     list, i.e., the double negatives, a value which is ignored in the
     computations.

     In the "numeric" interface, if 'length(x) >= 3', the values are
     interpreted as (n_11, n_01, n_10, n_00), always in this order,
     discarding extra values if necessary. The result is correct
     regardless of whether the frequencies are absolute or relative.
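     As a quick numeric cross-check of the formula above (a sketch in
     Python, not goSorensen code; the counts come from the
     cont_atlas.sanger_BP4 table printed earlier in this log):

```python
def d_sorensen(n11, n01, n10, n00=0):
    """Sorensen-Dice dissimilarity from the cells of a 2x2 enrichment
    table, in the order (n11, n01, n10, n00) described above; the
    double negatives n00 are ignored in the computation."""
    return (n10 + n01) / (2 * n11 + n10 + n01)

# Cells from the cont_atlas.sanger_BP4 table shown above:
# n11 = 201, n01 = 29, n10 = 212, n00 = 3465
d = d_sorensen(201, 29, 212, 3465)
print(round(d, 7))  # 0.3748056, matching dSorensen(cont_atlas.sanger_BP4)

# The result is scale invariant: relative frequencies give the same value.
total = 201 + 29 + 212 + 3465
assert abs(d_sorensen(201/total, 29/total, 212/total, 3465/total) - d) < 1e-12
```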

     If 'x' is an object of class "character", then 'x' (and 'y') must
     represent two "character" vectors of valid gene identifiers (e.g.,
     ENTREZ). Then the dissimilarity between lists 'x' and 'y' is
     computed, after internally summarizing them as a 2x2 contingency
     table of joint enrichment. This last operation is performed by
     function 'buildEnrichTable' and "valid gene identifiers (e.g.,
     ENTREZ)" stands for the coherency of these gene identifiers with
     the arguments 'geneUniverse' and 'orgPackg' of 'buildEnrichTable',
     passed by the ellipsis argument '...' in 'dSorensen'.

     If 'x' is an object of class "list", the argument must be a list
     of "character" vectors, each one representing a gene list
     (character identifiers). Then, all pairwise dissimilarities
     between these gene lists are computed.

     If 'x' is an object of class "tableList", the Sorensen-Dice
     dissimilarity is computed over each one of these tables. Given k
     gene lists (i.e. "character" vectors of gene identifiers) l1, l2,
     ..., lk, an object of class "tableList" (typically constructed by
     a call to function 'buildEnrichTable') is a list of lists of
     contingency tables t(i,j) generated from each pair of gene lists i
     and j, with the following structure:

     $l2

     $l2$l1$t(2,1)

     $l3

     $l3$l1$t(3,1), $l3$l2$t(3,2)

     ...

     $lk

     $lk$l1$t(k,1), $lk$l2$t(k,2), ..., $lk$l(k-1)t(k,k-1)
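     The nested "tableList" layout described above (one table per
     unordered pair of gene lists, keyed lower-triangularly: $l2$l1,
     $l3$l1, $l3$l2, ...) can be mimicked with plain dictionaries. A
     hypothetical Python sketch, purely to illustrate the shape;
     'pairwise_tables' and 'build_table' are illustrative names, not
     goSorensen API:

```python
from itertools import combinations

def pairwise_tables(gene_lists, build_table):
    """Build a tableList-like nested dict: out[l_i][l_j] holds the
    contingency table for the pair (l_i, l_j), for every j < i."""
    names = list(gene_lists)
    out = {}
    for i, j in combinations(range(len(names)), 2):
        later, earlier = names[j], names[i]
        out.setdefault(later, {})[earlier] = build_table(
            gene_lists[later], gene_lists[earlier])
    return out

# Toy check with k = 4 lists: k * (k - 1) / 2 = 6 pairwise tables,
# keyed as l2$l1, l3$l1, l3$l2, l4$l1, l4$l2, l4$l3.
toy = {f"l{i}": [] for i in range(1, 5)}
tabs = pairwise_tables(toy, lambda a, b: None)
assert sorted(tabs) == ["l2", "l3", "l4"]
assert list(tabs["l4"]) == ["l1", "l2", "l3"]
```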

Value:

     In the "table", "matrix", "numeric" and "character" interfaces,
     the value of the Sorensen-Dice dissimilarity. In the "list" and
     "tableList" interfaces, the symmetric matrix of all pairwise
     Sorensen-Dice dissimilarities.

Methods (by class):

        • 'dSorensen(table)': S3 method for class "table"

        • 'dSorensen(matrix)': S3 method for class "matrix"

        • 'dSorensen(numeric)': S3 method for class "numeric"

        • 'dSorensen(character)': S3 method for class "character"

        • 'dSorensen(list)': S3 method for class "list"

        • 'dSorensen(tableList)': S3 method for class "tableList"

See Also:

     'buildEnrichTable' for constructing contingency tables of mutual
     enrichment, 'nice2x2Table' for checking contingency tables
     validity, 'seSorensen' for computing the standard error of the
     dissimilarity, 'duppSorensen' for the upper limit of a one-sided
     confidence interval of the dissimilarity, 'equivTestSorensen' for
     an equivalence test.

Examples:

     # Gene lists 'atlas' and 'sanger' in the 'allOncoGeneLists' dataset. Table of
     # joint enrichment of GO terms in ontology BP at level 4.
     data(cont_atlas.sanger_BP4)
     cont_atlas.sanger_BP4
     ?cont_atlas.sanger_BP4
     dSorensen(cont_atlas.sanger_BP4)
     
     # Table represented as a vector:
     conti4 <- c(56, 1, 30, 471)
     dSorensen(conti4)
     # or as a plain matrix:
     dSorensen(matrix(conti4, nrow = 2))
     
     # This function is also appropriate for proportions:
     dSorensen(conti4 / sum(conti4))
     
     conti3 <- c(56, 1, 30)
     dSorensen(conti3)
     
     # Sorensen-Dice dissimilarity from scratch, directly from two gene lists:
     # (These examples may be considerably time consuming due to many enrichment
     # tests to build the contingency tables of joint enrichment)
     # data(allOncoGeneLists)
     # ?allOncoGeneLists
     
     # Obtaining ENTREZ identifiers for the gene universe of humans:
     # library(org.Hs.eg.db)
     # humanEntrezIDs <- keys(org.Hs.eg.db, keytype = "ENTREZID")
     
     # (Time consuming, building the table requires many enrichment tests:)
     # dSorensen(allOncoGeneLists$atlas, allOncoGeneLists$sanger,
     #           onto = "BP", GOLevel = 3,
     #           geneUniverse = humanEntrezIDs, orgPackg = "org.Hs.eg.db")
     
     # Essentially, the above code makes the same as:
     # cont_atlas.sanger_BP4 <- buildEnrichTable(allOncoGeneLists$atlas, allOncoGeneLists$sanger,
     #                                     onto = "BP", GOLevel = 4,
     #                                     geneUniverse = humanEntrezIDs, orgPackg = "org.Hs.eg.db")
     # dSorensen(cont_atlas.sanger_BP4)
     # (Quite time consuming, all pairwise dissimilarities:)
     # dSorensen(allOncoGeneLists,
     #           onto = "BP", GOLevel = 4,
     #           geneUniverse = humanEntrezIDs, orgPackg = "org.Hs.eg.db")
     

> dSorensen(cont_atlas.sanger_BP4)
[1] 0.3748056
> 
> # Standard error of this Sorensen-Dice dissimilarity estimate:
> ?seSorensen
seSorensen             package:goSorensen              R Documentation

Standard error of the sample Sorensen-Dice dissimilarity, asymptotic
approach

Description:

     Standard error of the sample Sorensen-Dice dissimilarity,
     asymptotic approach

Usage:

     seSorensen(x, ...)
     
     ## S3 method for class 'table'
     seSorensen(x, check.table = TRUE, ...)
     
     ## S3 method for class 'matrix'
     seSorensen(x, check.table = TRUE, ...)
     
     ## S3 method for class 'numeric'
     seSorensen(x, check.table = TRUE, ...)
     
     ## S3 method for class 'character'
     seSorensen(x, y, check.table = TRUE, ...)
     
     ## S3 method for class 'list'
     seSorensen(x, check.table = TRUE, ...)
     
     ## S3 method for class 'tableList'
     seSorensen(x, check.table = TRUE, ...)
     
Arguments:

       x: either an object of class "table", "matrix" or "numeric"
          representing a 2x2 contingency table, or a "character" (a set
          of gene identifiers) or "list" or "tableList" object. See the
          details section for more information.

     ...: extra parameters for function 'buildEnrichTable'.

check.table: Boolean. If TRUE (default), argument 'x' is checked to
          adequately represent a 2x2 contingency table. This checking
          is performed by means of function 'nice2x2Table'.

       y: an object of class "character" representing a vector of gene
          identifiers (e.g., ENTREZ).

Details:

     This function computes the standard error estimate of the sample
     Sorensen-Dice dissimilarity, given a 2x2 arrangement of
     frequencies (either implemented as a "table", a "matrix" or a
     "numeric" object):

       n_11   n_10
       n_01   n_00,
      
     The subindex '11' corresponds to those GO terms enriched in both
     lists, '01' to terms enriched in the second list but not in the
     first one, '10' to terms enriched in the first list but not in the
     second one, and '00' to those GO terms not enriched in either gene
     list, i.e., the double negatives, a value which is ignored in the
     computations.

     In the "numeric" interface, if 'length(x) >= 3', the values are
     interpreted as (n_11, n_01, n_10), always in this order.

     If 'x' is an object of class "character", then 'x' (and 'y') must
     represent two "character" vectors of valid gene identifiers (e.g.,
     ENTREZ). Then the standard error for the dissimilarity between
     lists 'x' and 'y' is computed, after internally summarizing them
     as a 2x2 contingency table of joint enrichment. This last
     operation is performed by function 'buildEnrichTable' and "valid
     gene identifiers (e.g., ENTREZ)" stands for the coherency of these
     gene identifiers with the arguments 'geneUniverse' and 'orgPackg'
     of 'buildEnrichTable', passed by the ellipsis argument '...' in
     'seSorensen'.

     In the "list" interface, the argument must be a list of
     "character" vectors, each one representing a gene list (character
     identifiers). Then, all pairwise standard errors of the
     dissimilarity between these gene lists are computed.

     If 'x' is an object of class "tableList", the standard error of
     the Sorensen-Dice dissimilarity estimate is computed over each one
     of these tables. Given k gene lists (i.e. "character" vectors of
     gene identifiers) l1, l2, ..., lk, an object of class "tableList"
     (typically constructed by a call to function 'buildEnrichTable')
     is a list of lists of contingency tables t(i,j) generated from
     each pair of gene lists i and j, with the following structure:

     $l2

     $l2$l1$t(2,1)

     $l3

     $l3$l1$t(3,1), $l3$l2$t(3,2)

     ...

     $lk

     $lk$l1$t(k,1), $lk$l2$t(k,2), ..., $lk$l(k-1)t(k,k-1)

Value:

     In the "table", "matrix", "numeric" and "character" interfaces,
     the value of the standard error of the Sorensen-Dice dissimilarity
     estimate. In the "list" and "tableList" interfaces, the symmetric
     matrix of all pairwise standard error estimates.

Methods (by class):

        • 'seSorensen(table)': S3 method for class "table"

        • 'seSorensen(matrix)': S3 method for class "matrix"

        • 'seSorensen(numeric)': S3 method for class "numeric"

        • 'seSorensen(character)': S3 method for class "character"

        • 'seSorensen(list)': S3 method for class "list"

        • 'seSorensen(tableList)': S3 method for class "tableList"

See Also:

     'buildEnrichTable' for constructing contingency tables of mutual
     enrichment, 'nice2x2Table' for checking the validity of enrichment
     contingency tables, 'dSorensen' for computing the Sorensen-Dice
     dissimilarity, 'duppSorensen' for the upper limit of a one-sided
     confidence interval of the dissimilarity, 'equivTestSorensen' for
     an equivalence test.

Examples:

     # Gene lists 'atlas' and 'sanger' in 'allOncoGeneLists' dataset. Table of joint enrichment
     # of GO terms in ontology BP at level 4.
     data(cont_atlas.sanger_BP4)
     cont_atlas.sanger_BP4
     dSorensen(cont_atlas.sanger_BP4)
     seSorensen(cont_atlas.sanger_BP4)
     
     # Contingency table as a numeric vector:
     seSorensen(c(56, 1, 30, 47))
     seSorensen(c(56, 1, 30))
     
     # (These examples may be considerably time consuming due to many enrichment
     # tests to build the contingency tables of mutual enrichment)
     # data(allOncoGeneLists)
     # ?allOncoGeneLists
     
     # Standard error of the sample Sorensen-Dice dissimilarity, directly from
     # two gene lists, from scratch:
     # seSorensen(allOncoGeneLists$atlas, allOncoGeneLists$sanger,
     #            onto = "BP", GOLevel = 3,
     #            geneUniverse = humanEntrezIDs, orgPackg = "org.Hs.eg.db")
     # Essentially, the above code makes the same as:
     # cont_atlas.sanger_BP4 <- buildEnrichTable(allOncoGeneLists$atlas, allOncoGeneLists$sanger,
     #                                     onto = "BP", GOLevel = 4,
     #                                     geneUniverse = humanEntrezIDs, orgPackg = "org.Hs.eg.db")
     # cont_atlas.sanger_BP4
     # seSorensen(cont_atlas.sanger_BP4)
     
     # All pairwise standard errors (quite time consuming):
     # seSorensen(allOncoGeneLists,
     #            onto = "BP", GOLevel = 4,
     #            geneUniverse = humanEntrezIDs, orgPackg = "org.Hs.eg.db")
     

> seSorensen(cont_atlas.sanger_BP4)
[1] 0.02240875
> 
> # Upper 95% confidence limit for the Sorensen-Dice dissimilarity:
> ?duppSorensen
duppSorensen            package:goSorensen             R Documentation

Upper limit of a one-sided confidence interval (0, dUpp] for the
Sorensen-Dice dissimilarity

Description:

     Upper limit of a one-sided confidence interval (0, dUpp] for the
     Sorensen-Dice dissimilarity

Usage:

     duppSorensen(x, ...)
     
     ## S3 method for class 'table'
     duppSorensen(
       x,
       dis = dSorensen.table(x, check.table = FALSE),
       se = seSorensen.table(x, check.table = FALSE),
       conf.level = 0.95,
       z.conf.level = qnorm(1 - conf.level),
       boot = FALSE,
       nboot = 10000,
       check.table = TRUE,
       ...
     )
     
     ## S3 method for class 'matrix'
     duppSorensen(
       x,
       dis = dSorensen.matrix(x, check.table = FALSE),
       se = seSorensen.matrix(x, check.table = FALSE),
       conf.level = 0.95,
       z.conf.level = qnorm(1 - conf.level),
       boot = FALSE,
       nboot = 10000,
       check.table = TRUE,
       ...
     )
     
     ## S3 method for class 'numeric'
     duppSorensen(
       x,
       dis = dSorensen.numeric(x, check.table = FALSE),
       se = seSorensen.numeric(x, check.table = FALSE),
       conf.level = 0.95,
       z.conf.level = qnorm(1 - conf.level),
       boot = FALSE,
       nboot = 10000,
       check.table = TRUE,
       ...
     )
     
     ## S3 method for class 'character'
     duppSorensen(
       x,
       y,
       conf.level = 0.95,
       boot = FALSE,
       nboot = 10000,
       check.table = TRUE,
       ...
     )
     
     ## S3 method for class 'list'
     duppSorensen(
       x,
       conf.level = 0.95,
       boot = FALSE,
       nboot = 10000,
       check.table = TRUE,
       ...
     )
     
     ## S3 method for class 'tableList'
     duppSorensen(
       x,
       conf.level = 0.95,
       boot = FALSE,
       nboot = 10000,
       check.table = TRUE,
       ...
     )
     
Arguments:

       x: either an object of class "table", "matrix" or "numeric"
          representing a 2x2 contingency table, or a "character" (a set
          of gene identifiers) or "list" or "tableList" object. See the
          details section for more information.

     ...: additional arguments for function 'buildEnrichTable'.

     dis: Sorensen-Dice dissimilarity value. Only required to speed
          computations if this value is known in advance.

      se: standard error estimate of the sample dissimilarity. Only
          required to speed computations if this value is known in
          advance.

conf.level: confidence level of the one-sided confidence interval, a
          numeric value between 0 and 1.

z.conf.level: standard normal (or bootstrap, see arguments below)
          distribution quantile at the '1 - conf.level' value. Only
          required to speed computations if this value is known in
          advance. Then, the argument 'conf.level' is ignored.

    boot: boolean. If TRUE, 'z.conf.level' is computed by means of a
          bootstrap approach instead of the asymptotic normal approach.
          Defaults to FALSE.

   nboot: numeric, number of initially planned bootstrap replicates.
          Ignored if 'boot == FALSE'. Defaults to 10000.

check.table: Boolean. If TRUE (default), argument 'x' is checked to
          adequately represent a 2x2 contingency table. This checking
          is performed by means of function 'nice2x2Table'.

       y: an object of class "character" representing a vector of gene
          identifiers (e.g., ENTREZ).

Details:

     This function computes the upper limit of a one-sided confidence
     interval for the Sorensen-Dice dissimilarity, given a 2x2
     arrangement of frequencies (either implemented as a "table", a
     "matrix" or a "numeric" object):

       n_11   n_10
       n_01   n_00,
      
     The subindex '11' corresponds to those GO terms enriched in both
     lists, '01' to terms enriched in the second list but not in the
     first one, '10' to terms enriched in the first list but not in the
     second one, and '00' to those GO terms not enriched in either gene
     list, i.e., the double negatives, a value which is ignored in the
     computations, except if 'boot == TRUE'.

     In the "numeric" interface, if 'length(x) >= 4', the values are
     interpreted as (n_11, n_01, n_10, n_00), always in this order and
     discarding extra values if necessary.

     Arguments 'dis', 'se' and 'z.conf.level' are not required. If
     known in advance (e.g., as a consequence of previous computations
     with the same data), providing their values may speed up the
     computations.

     By default, 'z.conf.level' corresponds to the 1 - conf.level
     quantile of a standard normal N(0,1) distribution, as the
     studentized statistic (^d - d) / ^se is asymptotically N(0,1). In
     the studentized statistic, d stands for the "true" Sorensen-Dice
     dissimilarity, ^d for its sample estimate and ^se for the estimate
     of its standard error. In fact, the normal is its limiting
     distribution but, for finite samples, the true sampling
     distribution may present departures from normality (mainly with
     some inflation in the left tail). The bootstrap method provides a
     better approximation to the true sampling distribution. In the
     bootstrap approach, 'nboot' new bootstrap contingency tables are
     generated from a multinomial distribution with parameters size =
     n11 + n01 + n10 + n00 and probabilities equal to the corresponding
     observed relative frequencies. Sometimes, some of these generated
     tables may present such low frequencies of enrichment that they
     are unsuitable for Sorensen-Dice computations. As a consequence,
     the number of effective bootstrap samples may be lower than the
     number of initially planned bootstrap samples 'nboot'. Computing
     in advance the value of argument 'z.conf.level' may be a way to
     cope with these departures from normality, by means of a more
     adequate quantile function. Alternatively, if 'boot == TRUE', a
     bootstrap quantile is internally computed.
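     Under the default asymptotic approach, the upper limit reduces to
     ^d - qnorm(1 - conf.level) * ^se (and since qnorm(0.05) is
     negative, this adds roughly 1.64 standard errors to ^d at
     conf.level = 0.95). A sketch in Python, not goSorensen code, using
     the dSorensen and seSorensen values reported for
     cont_atlas.sanger_BP4 elsewhere in this log:

```python
from statistics import NormalDist

def dupp_normal(d, se, conf_level=0.95):
    """Upper limit of the one-sided interval (0, dUpp] under the
    normal approximation: d - qnorm(1 - conf.level) * se."""
    z = NormalDist().inv_cdf(1 - conf_level)  # negative for conf_level > 0.5
    return d - z * se

# d and se values reported earlier in this log:
dupp = dupp_normal(0.3748056, 0.02240875)
print(round(dupp, 4))  # 0.4117
```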

     If 'x' is an object of class "character", then 'x' (and 'y') must
     represent two "character" vectors of valid gene identifiers (e.g.,
     ENTREZ). Then the confidence interval for the dissimilarity
     between lists 'x' and 'y' is computed, after internally
     summarizing them as a 2x2 contingency table of joint enrichment.
     This last operation is performed by function 'buildEnrichTable'
     and "valid gene identifiers (e.g., ENTREZ)" stands for the
     coherency of these gene identifiers with the arguments
     'geneUniverse' and 'orgPackg' of 'buildEnrichTable', passed by the
     ellipsis argument '...' in 'duppSorensen'.

     In the "list" interface, the argument must be a list of
     "character" vectors, each one representing a gene list (character
     identifiers). Then, all pairwise upper limits of the dissimilarity
     between these gene lists are computed.

     In the "tableList" interface, the upper limits are computed over
     each one of these tables. Given k gene lists (i.e. "character"
     vectors of gene identifiers) l1, l2, ..., lk, an object of class
     "tableList" (typically constructed by a call to function
     'buildEnrichTable') is a list of lists of contingency tables
     t(i,j) generated from each pair of gene lists i and j, with the
     following structure:

     $l2

     $l2$l1$t(2,1)

     $l3

     $l3$l1$t(3,1), $l3$l2$t(3,2)

     ...

     $lk

     $lk$l1$t(k,1), $lk$l2$t(k,2), ..., $lk$l(k-1)t(k,k-1)

Value:

     In the "table", "matrix", "numeric" and "character" interfaces,
     the value of the upper limit of the confidence interval for the
     Sorensen-Dice dissimilarity. When 'boot == TRUE', this result also
     has an extra attribute, "eff.nboot", which corresponds to the
     number of effective bootstrap replicates; see the details section.
     In the "list" and "tableList" interfaces, the result is the
     symmetric matrix of all pairwise upper limits.

Methods (by class):

        • 'duppSorensen(table)': S3 method for class "table"

        • 'duppSorensen(matrix)': S3 method for class "matrix"

        • 'duppSorensen(numeric)': S3 method for class "numeric"

        • 'duppSorensen(character)': S3 method for class "character"

        • 'duppSorensen(list)': S3 method for class "list"

        • 'duppSorensen(tableList)': S3 method for class "tableList"

_S_e_e _A_l_s_o:

     'buildEnrichTable' for constructing contingency tables of mutual
     enrichment, 'nice2x2Table' for checking contingency tables
     validity, 'dSorensen' for computing the Sorensen-Dice
     dissimilarity, 'seSorensen' for computing the standard error of
     the dissimilarity, 'equivTestSorensen' for an equivalence test.

_E_x_a_m_p_l_e_s:

     # Gene lists 'atlas' and 'sanger' in 'allOncoGeneLists' dataset. Table of joint enrichment
     # of GO terms in ontology BP at level 4.
     data(cont_atlas.sanger_BP4)
     ?cont_atlas.sanger_BP4
     duppSorensen(cont_atlas.sanger_BP4)
     dSorensen(cont_atlas.sanger_BP4) + qnorm(0.95) * seSorensen(cont_atlas.sanger_BP4)
     # Using the bootstrap approximation instead of the normal approximation to
     # the sampling distribution of (^d - d) / se(^d):
     duppSorensen(cont_atlas.sanger_BP4, boot = TRUE)
     
     # Contingency table as a numeric vector:
     duppSorensen(c(56, 1, 30, 47))
     duppSorensen(c(56, 1, 30))
     
     # Upper confidence limit for the Sorensen-Dice dissimilarity, from scratch,
     # directly from two gene lists:
     # (These examples may be considerably time consuming due to many enrichment
     # tests to build the contingency tables of mutual enrichment)
     # data(allOncoGeneLists)
     # ?allOncoGeneLists
     
     # Obtaining ENTREZ identifiers for the gene universe of humans:
     # library(org.Hs.eg.db)
     # humanEntrezIDs <- keys(org.Hs.eg.db, keytype = "ENTREZID")
     
     # Computing the Upper confidence limit:
     # duppSorensen(allOncoGeneLists$atlas, allOncoGeneLists$sanger,
     #              onto = "CC", GOLevel = 5,
     #              geneUniverse = humanEntrezIDs, orgPackg = "org.Hs.eg.db")
     # Even more time consuming (all pairwise values):
     # duppSorensen(allOncoGeneLists,
     #              onto = "CC", GOLevel = 5,
     #              geneUniverse = humanEntrezIDs, orgPackg = "org.Hs.eg.db")
     

> duppSorensen(cont_atlas.sanger_BP4)
[1] 0.4116647
> # This confidence limit is based on an asymptotic normal N(0,1)
> # approximation to the distribution of (dSampl - d) / se, where
> # dSampl stands for the sample dissimilarity, d for the true dissimilarity
> # and se for the sample dissimilarity standard error estimate.
> 
> # Upper confidence limit but using a Student's t instead of a N(0,1)
> # (just as an example, not recommended: no theoretical justification)
> df <- sum(cont_atlas.sanger_BP4[1:3]) - 2
> duppSorensen(cont_atlas.sanger_BP4, z.conf.level = qt(1 - 0.95, df))
[1] 0.4117425
> 
> # Upper confidence limit but using a bootstrap approximation
> # to the sampling distribution, instead of a N(0,1)
> set.seed(123)
> duppSorensen(cont_atlas.sanger_BP4, boot = TRUE)
[1] 0.4124639
attr(,"eff.nboot")
[1] 10000
> 
> # Some computations on diverse data structures:
> badConti <- as.table(matrix(c(501, 27, 36, 12, 43, 15, 0, 0, 0),
+                             nrow = 3, ncol = 3,
+                             dimnames = list(c("a1","a2","a3"),
+                                             c("b1", "b2","b3"))))
> tryCatch(nice2x2Table(badConti), error = function(e) {return(e)})
<simpleError in nice2x2Table.table(badConti): Not a 2x2 table>
> 
> incompleteConti <- badConti[1,1:min(2,ncol(badConti)), drop = FALSE]
> incompleteConti
    b1  b2
a1 501  12
> tryCatch(nice2x2Table(incompleteConti), error = function(e) {return(e)})
<simpleError in nice2x2Table.table(incompleteConti): Not a 2x2 table>
> 
> contiAsVector <- c(32, 21, 81, 1439)
> nice2x2Table(contiAsVector)
[1] TRUE
> contiAsVector.mat <- matrix(contiAsVector, nrow = 2)
> contiAsVector.mat
     [,1] [,2]
[1,]   32   81
[2,]   21 1439
> contiAsVectorLen3 <- c(32, 21, 81)
> nice2x2Table(contiAsVectorLen3)
[1] TRUE
> 
> tryCatch(dSorensen(badConti), error = function(e) {return(e)})
<simpleError in nice2x2Table.table(x): Not a 2x2 table>
> 
> # Apparently, the next command works fine, but it returns a wrong value!
> dSorensen(badConti, check.table = FALSE)
[1] 0.05915493
> 
> tryCatch(dSorensen(incompleteConti), error = function(e) {return(e)})
<simpleError in nice2x2Table.table(x): Not a 2x2 table>
> dSorensen(contiAsVector)
[1] 0.6144578
> dSorensen(contiAsVector.mat)
[1] 0.6144578
> dSorensen(contiAsVectorLen3)
[1] 0.6144578
> dSorensen(contiAsVectorLen3, check.table = FALSE)
[1] 0.6144578
> dSorensen(c(0,0,0,45))
[1] NaN
> 
> tryCatch(seSorensen(badConti), error = function(e) {return(e)})
<simpleError in nice2x2Table.table(x): Not a 2x2 table>
> tryCatch(seSorensen(incompleteConti), error = function(e) {return(e)})
<simpleError in nice2x2Table.table(x): Not a 2x2 table>
> seSorensen(contiAsVector)
[1] 0.04818012
> seSorensen(contiAsVector.mat)
[1] 0.04818012
> seSorensen(contiAsVectorLen3)
[1] 0.04818012
> seSorensen(contiAsVectorLen3, check.table = FALSE)
[1] 0.04818012
> tryCatch(seSorensen(contiAsVectorLen3, check.table = "not"), error = function(e) {return(e)})
<simpleError in seSorensen.numeric(contiAsVectorLen3, check.table = "not"): Argument 'check.table' must be logical>
> seSorensen(c(0,0,0,45))
[1] NaN
> 
> tryCatch(duppSorensen(badConti), error = function(e) {return(e)})
<simpleError in nice2x2Table.table(x): Not a 2x2 table>
> tryCatch(duppSorensen(incompleteConti), error = function(e) {return(e)})
<simpleError in nice2x2Table.table(x): Not a 2x2 table>
> duppSorensen(contiAsVector)
[1] 0.6937071
> duppSorensen(contiAsVector.mat)
[1] 0.6937071
> set.seed(123)
> duppSorensen(contiAsVector, boot = TRUE)
[1] 0.6922658
attr(,"eff.nboot")
[1] 10000
> set.seed(123)
> duppSorensen(contiAsVector.mat, boot = TRUE)
[1] 0.6922658
attr(,"eff.nboot")
[1] 10000
> duppSorensen(contiAsVectorLen3)
[1] 0.6937071
> # Bootstrapping requires full contingency tables (4 values)
> set.seed(123)
> tryCatch(duppSorensen(contiAsVectorLen3, boot = TRUE), error = function(e) {return(e)})
<simpleError in duppSorensen.numeric(contiAsVectorLen3, boot = TRUE): Bootstraping requires a numeric vector of 4 frequencies>
> duppSorensen(c(0,0,0,45))
[1] NaN
> 
> # Equivalence test, H0: d >= d0 vs  H1: d < d0 (d0 = 0.4444)
> ?equivTestSorensen
equivTestSorensen          package:goSorensen          R Documentation

_E_q_u_i_v_a_l_e_n_c_e _t_e_s_t _b_a_s_e_d _o_n _t_h_e _S_o_r_e_n_s_e_n-_D_i_c_e _d_i_s_s_i_m_i_l_a_r_i_t_y

_D_e_s_c_r_i_p_t_i_o_n:

     Equivalence test based on the Sorensen-Dice dissimilarity,
     computed either by an asymptotic normal approach or by a bootstrap
     approach.

_U_s_a_g_e:

     equivTestSorensen(x, ...)
     
     ## S3 method for class 'table'
     equivTestSorensen(
       x,
       d0 = 1/(1 + 1.25),
       conf.level = 0.95,
       boot = FALSE,
       nboot = 10000,
       check.table = TRUE,
       ...
     )
     
     ## S3 method for class 'matrix'
     equivTestSorensen(
       x,
       d0 = 1/(1 + 1.25),
       conf.level = 0.95,
       boot = FALSE,
       nboot = 10000,
       check.table = TRUE,
       ...
     )
     
     ## S3 method for class 'numeric'
     equivTestSorensen(
       x,
       d0 = 1/(1 + 1.25),
       conf.level = 0.95,
       boot = FALSE,
       nboot = 10000,
       check.table = TRUE,
       ...
     )
     
     ## S3 method for class 'character'
     equivTestSorensen(
       x,
       y,
       d0 = 1/(1 + 1.25),
       conf.level = 0.95,
       boot = FALSE,
       nboot = 10000,
       check.table = TRUE,
       ...
     )
     
     ## S3 method for class 'list'
     equivTestSorensen(
       x,
       d0 = 1/(1 + 1.25),
       conf.level = 0.95,
       boot = FALSE,
       nboot = 10000,
       check.table = TRUE,
       ...
     )
     
     ## S3 method for class 'tableList'
     equivTestSorensen(
       x,
       d0 = 1/(1 + 1.25),
       conf.level = 0.95,
       boot = FALSE,
       nboot = 10000,
       check.table = TRUE,
       ...
     )
     
_A_r_g_u_m_e_n_t_s:

       x: either an object of class "table", "matrix", "numeric",
          "character", "list" or "tableList". See the details section
          for more information.

     ...: extra parameters for function 'buildEnrichTable'.

      d0: equivalence threshold for the Sorensen-Dice dissimilarity, d.
          The null hypothesis states that d >= d0, i.e., inequivalence
          between the compared gene lists and the alternative that d <
          d0, i.e., equivalence or dissimilarity irrelevance (up to a
          level d0).

conf.level: confidence level of the one-sided confidence interval, a
          value between 0 and 1.

    boot: boolean. If TRUE, the confidence interval and the test
          p-value are computed by means of a bootstrap approach instead
          of the asymptotic normal approach. Defaults to FALSE.

   nboot: numeric, number of initially planned bootstrap replicates.
          Ignored if 'boot == FALSE'. Defaults to 10000.

check.table: Boolean. If TRUE (default), argument 'x' is checked to
          adequately represent a 2x2 contingency table (or an aggregate
          of them) or gene lists producing a correct table. This
          checking is performed by means of function 'nice2x2Table'.

       y: an object of class "character" representing a list of gene
          identifiers (e.g., ENTREZ).

_D_e_t_a_i_l_s:

     This function computes either the normal asymptotic or the
     bootstrap equivalence test based on the Sorensen-Dice
     dissimilarity, given a 2x2 arrangement of frequencies (either
     implemented as a "table", a "matrix" or a "numeric" object):

       n_{11}   n_{10} 
       n_{01}  n_{00}, 
      
     The subindex '11' corresponds to those GO terms enriched in both
     lists, '01' to terms enriched in the second list but not in the
     first one, '10' to terms enriched in the first list but not
     enriched in the second one and '00' corresponds to those GO terms
     enriched in neither gene list, i.e., to the double negatives, a
     value which is ignored in the computations.

     In the "numeric" interface, if 'length(x) >= 4', the values are
     interpreted as (n_11, n_01, n_10, n_00), always in this order and
     discarding extra values if necessary.
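As an illustration of the "numeric" interface, the transcript's normal test for the atlas/sanger table can be reproduced by hand in base R (the standard-error formula here is an assumption inferred from the printed output, not necessarily the package's exact implementation):

```r
# Normal asymptotic equivalence test by hand, from c(n11, n01, n10, n00):
x  <- c(201, 29, 212, 3465)            # atlas vs sanger, BP level 4
d0 <- 1 / (1 + 1.25)                   # default threshold, 0.4444444
d  <- (x[2] + x[3]) / (2 * x[1] + x[2] + x[3])
n  <- sum(x[1:3]); p <- x[1] / n
# Assumed delta-method standard error (n00 is ignored):
se <- 2 / (1 + p)^2 * sqrt(p * (1 - p) / (n - 1))
stat <- (d - d0) / se                  # ~ -3.1077
pnorm(stat)                            # one-sided p-value, ~0.0009429
```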

     If 'x' is an object of class "character", then 'x' (and 'y') must
     represent two "character" vectors of valid gene identifiers (e.g.,
     ENTREZ). Then the equivalence test is performed between 'x' and
     'y', after internally summarizing them as a 2x2 contingency table
     of joint enrichment. This last operation is performed by function
     'buildEnrichTable' and "valid gene identifiers (e.g., ENTREZ)"
     refers to the consistency of these gene identifiers with the
     arguments 'geneUniverse' and 'orgPackg' of 'buildEnrichTable',
     passed by the ellipsis argument '...' in 'equivTestSorensen'.

     If 'x' is an object of class "list", each of its elements must be
     a "character" vector of gene identifiers (e.g., ENTREZ). Then all
     pairwise equivalence tests are performed between these gene lists.

     Class "tableList" corresponds to objects representing all mutual
     enrichment contingency tables generated in a pairwise fashion:
     Given gene lists l1, l2, ..., lk, an object of class "tableList"
     (typically constructed by a call to function 'buildEnrichTable')
     is a list of lists of contingency tables tij generated from each
     pair of gene lists i and j, with the following structure:

     $l2

     $l2$l1$t21

     $l3

     $l3$l1$t31, $l3$l2$t32

     ...

     $lk

     $lk$l1$tk1, $lk$l2$tk2, ..., $lk$l(k-1)$tk(k-1)

     If 'x' is an object of class "tableList", the test is performed
     over each one of these tables.

     The test is based on the fact that the studentized statistic (^d -
     d) / ^se is approximately distributed as a standard normal. ^d
     stands for the sample Sorensen-Dice dissimilarity, d for its true
     (unknown) value and ^se for the estimate of its standard error.
     This result is asymptotically correct, but the true distribution
     of the studentized statistic is not exactly normal for finite
     samples, with a heavier left tail than expected under the Gaussian
     model, which may produce some type I error inflation. The
     bootstrap method provides a better approximation to this
     distribution. In the bootstrap approach, 'nboot' new bootstrap
     contingency tables are generated from a multinomial distribution
     with parameters 'size =' (n11 + n01 + n10 + n00) and probabilities
     (n11, n01, n10, n00) / (n11 + n01 + n10 + n00). Sometimes, some of
     these generated tables may have so low enrichment frequencies that
     they are unsuitable for Sorensen-Dice computations. As a
     consequence, the number of effective bootstrap samples may be
     lower than the number of initially planned ones, 'nboot'. Our
     simulation studies concluded that this makes the test more
     conservative, less prone to reject a truly false null hypothesis
     of inequivalence, and in any case protects from inflating the type
     I error.

     In a bootstrap test result, use 'getNboot' to access the number of
     initially planned bootstrap replicates and 'getEffNboot' to access
     the number of finally effective bootstrap replicates.
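The bootstrap scheme just described can be sketched in base R. This is a simplified illustration under the same assumed delta-method standard-error formula as above, not the package's exact code:

```r
# Bootstrap studentized statistics from multinomial resampling of the
# observed 2x2 table c(n11, n01, n10, n00):
dStat  <- function(x) (x[2] + x[3]) / (2 * x[1] + x[2] + x[3])
seStat <- function(x) {
  n <- sum(x[1:3]); p <- x[1] / n
  2 / (1 + p)^2 * sqrt(p * (1 - p) / (n - 1))  # assumed delta-method se
}
x <- c(32, 21, 81, 1439)
set.seed(123)
nboot <- 10000
boots <- rmultinom(nboot, size = sum(x), prob = x / sum(x))
tBoot <- apply(boots, 2, function(b) (dStat(b) - dStat(x)) / seStat(b))
tBoot <- tBoot[is.finite(tBoot)]  # effective replicates may be < nboot
length(tBoot)                     # the "eff.nboot" attribute
# Studentized bootstrap upper limit: the 5% bootstrap quantile of the
# statistic replaces -qnorm(0.95) of the normal approximation:
dStat(x) - quantile(tBoot, 0.05) * seStat(x)
```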

_V_a_l_u_e:

     For all interfaces (except for the "list" and "tableList"
     interfaces) the result is a list of class "equivSDhtest" which
     inherits from "htest", with the following components:

     statistic the value of the studentized statistic (dSorensen(x) -
          d0) / seSorensen(x)

     p.value the p-value of the test

     conf.int the one-sided confidence interval (0, dUpp]

     estimate the Sorensen dissimilarity estimate, dSorensen(x)

     null.value the value of d0

     stderr the standard error of the Sorensen dissimilarity estimate,
          seSorensen(x), used as denominator in the studentized
          statistic

     alternative a character string describing the alternative
          hypothesis

     method a character string describing the test

     data.name a character string giving the names of the data

     enrichTab the 2x2 contingency table of joint enrichment on which
          the test was based

     For the "list" and "tableList" interfaces, the result is an
     "equivSDhtestList", a list of objects with all pairwise
     comparisons, each one being an object of "equivSDhtest" class.

_M_e_t_h_o_d_s (_b_y _c_l_a_s_s):

        • 'equivTestSorensen(table)': S3 method for class "table"

        • 'equivTestSorensen(matrix)': S3 method for class "matrix"

        • 'equivTestSorensen(numeric)': S3 method for class "numeric"

        • 'equivTestSorensen(character)': S3 method for class
          "character"

        • 'equivTestSorensen(list)': S3 method for class "list"

        • 'equivTestSorensen(tableList)': S3 method for class
          "tableList"

_S_e_e _A_l_s_o:

     'nice2x2Table' for checking and reformatting data, 'dSorensen' for
     computing the Sorensen-Dice dissimilarity, 'seSorensen' for
     computing the standard error of the dissimilarity, 'duppSorensen'
     for the upper limit of a one-sided confidence interval of the
     dissimilarity. 'getTable', 'getPvalue', 'getUpper', 'getSE',
     'getNboot' and 'getEffNboot' for accessing specific fields in the
     result of these testing functions. 'update' for updating the
     result of these testing functions with alternative equivalence
     limits, confidence levels or to convert a normal result into a
     bootstrap result or the reverse.

_E_x_a_m_p_l_e_s:

     # Gene lists 'atlas' and 'sanger' in 'allOncoGeneLists' dataset. Table of joint enrichment
     # of GO terms in ontology BP at level 4.
     data(cont_atlas.sanger_BP4)
     cont_atlas.sanger_BP4
     equivTestSorensen(cont_atlas.sanger_BP4)
     # Bootstrap test:
     equivTestSorensen(cont_atlas.sanger_BP4, boot = TRUE)
     
     # Equivalence tests from scratch, directly from gene lists:
     # (These examples may be considerably time consuming due to many enrichment
     # tests to build the contingency tables of mutual enrichment)
     # data(allOncoGeneLists)
     # ?allOncoGeneLists
     
     # Obtaining ENTREZ identifiers for the gene universe of humans:
     library(org.Hs.eg.db)
     humanEntrezIDs <- keys(org.Hs.eg.db, keytype = "ENTREZID")
     
     # Computing the equivalence test:
     # equivTestSorensen(allOncoGeneLists$atlas, allOncoGeneLists$sanger,
     #                   geneUniverse = humanEntrezIDs, orgPackg = "org.Hs.eg.db",
     #                   onto = "BP", GOLevel = 4)
     # Bootstrap instead of normal approximation test:
     # equivTestSorensen(allOncoGeneLists$atlas, allOncoGeneLists$sanger,
     #                   geneUniverse = humanEntrezIDs, orgPackg = "org.Hs.eg.db",
     #                   onto = "BP", GOLevel = 4,
     #                   boot = TRUE)
     
     # Essentially, the above code makes:
     # ccont_atlas.sanger_BP4 <- buildEnrichTable(allOncoGeneLists$atlas, allOncoGeneLists$sanger,
     #                                   geneUniverse = humanEntrezIDs, orgPackg = "org.Hs.eg.db",
     #                                   onto = "BP", GOLevel = 4)
     # ccont_atlas.sanger_BP4
     # equivTestSorensen(ccont_atlas.sanger_BP4)
     # equivTestSorensen(ccont_atlas.sanger_BP4, boot = TRUE)
     # (Note that building the contingency table first may be advantageous to save time!)
     # The objects cont_atlas.sanger_BP4 and ccont_atlas.sanger_BP4 are exactly the same
     
     # All pairwise equivalence tests:
     # equivTestSorensen(allOncoGeneLists,
     #                   geneUniverse = humanEntrezIDs, orgPackg = "org.Hs.eg.db",
     #                   onto = "BP", GOLevel = 4)
     
     
     # Equivalence test on a contingency table represented as a numeric vector:
     equivTestSorensen(c(56, 1, 30, 47))
     equivTestSorensen(c(56, 1, 30, 47), boot = TRUE)
     equivTestSorensen(c(56, 1, 30))
     # Error: all frequencies are needed for bootstrap:
     try(equivTestSorensen(c(56, 1, 30), boot = TRUE), TRUE)
     

> equiv.atlas.sanger <- equivTestSorensen(cont_atlas.sanger_BP4)
> equiv.atlas.sanger

	Normal asymptotic test for 2x2 contingency tables based on the
	Sorensen-Dice dissimilarity

data:  cont_atlas.sanger_BP4
(d - d0) / se = -3.1077, p-value = 0.0009429
alternative hypothesis: true equivalence limit d0 is less than 0.4444444
95 percent confidence interval:
 0.0000000 0.4116647
sample estimates:
Sorensen dissimilarity 
             0.3748056 
attr(,"se")
standard error 
    0.02240875 

> getTable(equiv.atlas.sanger)
                 Enriched in sanger
Enriched in atlas TRUE FALSE
            TRUE   201   212
            FALSE   29  3465
> getPvalue(equiv.atlas.sanger)
     p-value 
0.0009428632 
> 
> tryCatch(equivTestSorensen(badConti), error = function(e) {return(e)})
<simpleError in nice2x2Table.table(x): Not a 2x2 table>
> tryCatch(equivTestSorensen(incompleteConti), error = function(e) {return(e)})
<simpleError in nice2x2Table.table(x): Not a 2x2 table>
> equivTestSorensen(contiAsVector)

	Normal asymptotic test for 2x2 contingency tables based on the
	Sorensen-Dice dissimilarity

data:  contiAsVector
(d - d0) / se = 3.5287, p-value = 0.9998
alternative hypothesis: true equivalence limit d0 is less than 0.4444444
95 percent confidence interval:
 0.0000000 0.6937071
sample estimates:
Sorensen dissimilarity 
             0.6144578 
attr(,"se")
standard error 
    0.04818012 

> equivTestSorensen(contiAsVector.mat)

	Normal asymptotic test for 2x2 contingency tables based on the
	Sorensen-Dice dissimilarity

data:  contiAsVector.mat
(d - d0) / se = 3.5287, p-value = 0.9998
alternative hypothesis: true equivalence limit d0 is less than 0.4444444
95 percent confidence interval:
 0.0000000 0.6937071
sample estimates:
Sorensen dissimilarity 
             0.6144578 
attr(,"se")
standard error 
    0.04818012 

> set.seed(123)
> equivTestSorensen(contiAsVector.mat, boot = TRUE)

	Bootstrap test for 2x2 contingency tables based on the Sorensen-Dice
	dissimilarity (10000 bootstrap replicates)

data:  contiAsVector.mat
(d - d0) / se = 3.5287, p-value = 0.9996
alternative hypothesis: true equivalence limit d0 is less than 0.4444444
95 percent confidence interval:
 0.0000000 0.6922658
sample estimates:
Sorensen dissimilarity 
             0.6144578 
attr(,"se")
standard error 
    0.04818012 

> equivTestSorensen(contiAsVectorLen3)

	Normal asymptotic test for 2x2 contingency tables based on the
	Sorensen-Dice dissimilarity

data:  contiAsVectorLen3
(d - d0) / se = 3.5287, p-value = 0.9998
alternative hypothesis: true equivalence limit d0 is less than 0.4444444
95 percent confidence interval:
 0.0000000 0.6937071
sample estimates:
Sorensen dissimilarity 
             0.6144578 
attr(,"se")
standard error 
    0.04818012 

> 
> tryCatch(equivTestSorensen(contiAsVectorLen3, boot = TRUE), error = function(e) {return(e)})
<simpleError in equivTestSorensen.numeric(contiAsVectorLen3, boot = TRUE): Bootstraping requires a numeric vector of 4 frequencies>
> 
> equivTestSorensen(c(0,0,0,45))

	No test performed due non finite (d - d0) / se statistic

data:  c(0, 0, 0, 45)
(d - d0) / se = NaN, p-value = NA
alternative hypothesis: true equivalence limit d0 is less than 0.4444444
95 percent confidence interval:
   0 NaN
sample estimates:
Sorensen dissimilarity 
                   NaN 
attr(,"se")
standard error 
           NaN 

> 
> # Sorensen-Dice computations from scratch, directly from gene lists
> data(allOncoGeneLists)
> ?allOncoGeneLists
allOncoGeneLists          package:goSorensen           R Documentation

_7 _g_e_n_e _l_i_s_t_s _p_o_s_s_i_b_l_y _r_e_l_a_t_e_d _w_i_t_h _c_a_n_c_e_r

_D_e_s_c_r_i_p_t_i_o_n:

     An object of class "list" of length 7. Each one of its elements is
     a "character" vector of gene identifiers (e.g., ENTREZ). Only gene
     lists of length at least 100 were taken from their source web.
     Take these lists just as an illustrative example; they are not
     automatically updated.

_U_s_a_g_e:

     data(allOncoGeneLists)
     
_F_o_r_m_a_t:

     An object of class "list" of length 7. Each one of its elements is
     a "character" vector of ENTREZ gene identifiers.

_S_o_u_r_c_e:

     <http://www.bushmanlab.org/links/genelists>


> 
> library(org.Hs.eg.db)
Loading required package: AnnotationDbi
Loading required package: stats4
Loading required package: BiocGenerics
Loading required package: generics

Attaching package: 'generics'

The following objects are masked from 'package:base':

    as.difftime, as.factor, as.ordered, intersect, is.element, setdiff,
    setequal, union


Attaching package: 'BiocGenerics'

The following objects are masked from 'package:stats':

    IQR, mad, sd, var, xtabs

The following objects are masked from 'package:base':

    Filter, Find, Map, Position, Reduce, anyDuplicated, aperm, append,
    as.data.frame, basename, cbind, colnames, dirname, do.call,
    duplicated, eval, evalq, get, grep, grepl, is.unsorted, lapply,
    mapply, match, mget, order, paste, pmax, pmax.int, pmin, pmin.int,
    rank, rbind, rownames, sapply, saveRDS, table, tapply, unique,
    unsplit, which.max, which.min

Loading required package: Biobase
Welcome to Bioconductor

    Vignettes contain introductory material; view with
    'browseVignettes()'. To cite Bioconductor, see
    'citation("Biobase")', and for packages 'citation("pkgname")'.

Loading required package: IRanges
Loading required package: S4Vectors

Attaching package: 'S4Vectors'

The following object is masked from 'package:utils':

    findMatches

The following objects are masked from 'package:base':

    I, expand.grid, unname

> humanEntrezIDs <- keys(org.Hs.eg.db, keytype = "ENTREZID")
> # First, the mutual GO node enrichment tables are built, then computations
> # proceed from these contingency tables.
> # Building the contingency tables is a slow process (many enrichment tests)
> normTest <- equivTestSorensen(allOncoGeneLists[["atlas"]], allOncoGeneLists[["sanger"]],
+                               listNames = c("atlas", "sanger"),
+                               onto = "BP", GOLevel = 5,
+                               geneUniverse = humanEntrezIDs, orgPackg = "org.Hs.eg.db")

> normTest

	Normal asymptotic test for 2x2 contingency tables based on the
	Sorensen-Dice dissimilarity

data:  tab
(d - d0) / se = -8.5125, p-value < 2.2e-16
alternative hypothesis: true equivalence limit d0 is less than 0.4444444
95 percent confidence interval:
 0.0000000 0.3482836
sample estimates:
Sorensen dissimilarity 
             0.3252525 
attr(,"se")
standard error 
    0.01400193 

> 
> # To perform a bootstrap test from scratch would be even slower:
> # set.seed(123)
> # bootTest <- equivTestSorensen(allOncoGeneLists[["atlas"]], allOncoGeneLists[["sanger"]],
> #                               listNames = c("atlas", "sanger"),
> #                               boot = TRUE,
> #                               onto = "BP", GOLevel = 5,
> #                               geneUniverse = humanEntrezIDs, orgPackg = "org.Hs.eg.db")
> # bootTest
> 
> # It is much faster to upgrade 'normTest' to be a bootstrap test:
> set.seed(123)
> bootTest <- upgrade(normTest, boot = TRUE)
> bootTest

	Bootstrap test for 2x2 contingency tables based on the Sorensen-Dice
	dissimilarity (10000 bootstrap replicates)

data:  tab
(d - d0) / se = -8.5125, p-value = 9.999e-05
alternative hypothesis: true equivalence limit d0 is less than 0.4444444
95 percent confidence interval:
 0.0000000 0.3484472
sample estimates:
Sorensen dissimilarity 
             0.3252525 
attr(,"se")
standard error 
    0.01400193 

> # To know the number of planned bootstrap replicates:
> getNboot(bootTest)
[1] 10000
> # To know the number of valid bootstrap replicates:
> getEffNboot(bootTest)
[1] 10000
> 
> # There are similar methods for dSorensen, seSorensen, duppSorensen, etc. to
> # compute directly from a pair of gene lists.
> # They are quite slow for the same reason as before (many enrichment tests).
> # dSorensen(allOncoGeneLists[["atlas"]], allOncoGeneLists[["sanger"]],
> #           listNames = c("atlas", "sanger"),
> #           onto = "BP", GOLevel = 5,
> #           geneUniverse = humanEntrezIDs, orgPackg = "org.Hs.eg.db")
> # seSorensen(allOncoGeneLists[["atlas"]], allOncoGeneLists[["sanger"]],
> #            listNames = c("atlas", "sanger"),
> #            onto = "BP", GOLevel = 5,
> #            geneUniverse = humanEntrezIDs, orgPackg = "org.Hs.eg.db")
> #
> # duppSorensen(allOncoGeneLists[["atlas"]], allOncoGeneLists[["sanger"]],
> #              listNames = c("atlas", "sanger"),
> #              onto = "BP", GOLevel = 5,
> #              geneUniverse = humanEntrezIDs, orgPackg = "org.Hs.eg.db")
> #
> # set.seed(123)
> # duppSorensen(allOncoGeneLists[["atlas"]], allOncoGeneLists[["sanger"]],
> #              boot = TRUE,
> #              listNames = c("atlas", "sanger"),
> #              onto = "BP", GOLevel = 5,
> #              geneUniverse = humanEntrezIDs, orgPackg = "org.Hs.eg.db")
> # etc.
> 
> # Building the contingency table first and then computing from it may be a more
> # flexible and time-saving strategy, in general:
> ?buildEnrichTable
buildEnrichTable          package:goSorensen           R Documentation

_C_r_e_a_t_e_s _a _2_x_2 _e_n_r_i_c_h_m_e_n_t _c_o_n_t_i_n_g_e_n_c_y _t_a_b_l_e _f_r_o_m _t_w_o _g_e_n_e _l_i_s_t_s, _o_r _a_l_l
_p_a_i_r_w_i_s_e _c_o_n_t_i_n_g_e_n_c_y _t_a_b_l_e_s _f_o_r _a "_l_i_s_t" _o_f _g_e_n_e _l_i_s_t_s.

_D_e_s_c_r_i_p_t_i_o_n:

     Creates a 2x2 enrichment contingency table from two gene lists, or
     all pairwise contingency tables for a "list" of gene lists.

_U_s_a_g_e:

     buildEnrichTable(x, ...)
     
     ## Default S3 method:
     buildEnrichTable(
       x,
       y,
       listNames = c("gene.list1", "gene.list2"),
       check.table = TRUE,
       geneUniverse,
       orgPackg,
       onto,
       GOLevel,
       storeEnrichedIn = TRUE,
       pAdjustMeth = "BH",
       pvalCutoff = 0.01,
       qvalCutoff = 0.05,
       parallel = FALSE,
       nOfCores = 1,
       ...
     )
     
     ## S3 method for class 'character'
     buildEnrichTable(
       x,
       y,
       listNames = c("gene.list1", "gene.list2"),
       check.table = TRUE,
       geneUniverse,
       orgPackg,
       onto,
       GOLevel,
       storeEnrichedIn = TRUE,
       pAdjustMeth = "BH",
       pvalCutoff = 0.01,
       qvalCutoff = 0.05,
       parallel = FALSE,
       nOfCores = min(detectCores() - 1),
       ...
     )
     
     ## S3 method for class 'list'
     buildEnrichTable(
       x,
       check.table = TRUE,
       geneUniverse,
       orgPackg,
       onto,
       GOLevel,
       storeEnrichedIn = TRUE,
       pAdjustMeth = "BH",
       pvalCutoff = 0.01,
       qvalCutoff = 0.05,
       parallel = FALSE,
       nOfCores = min(detectCores() - 1, length(x) - 1),
       ...
     )
     
_A_r_g_u_m_e_n_t_s:

       x: either an object of class "character" (or coercible to
          "character") representing a vector of gene identifiers (e.g.,
          ENTREZ) or an object of class "list". In this second case,
          each element of the list must be a "character" vector of gene
          identifiers (e.g., ENTREZ). Then, all pairwise contingency
          tables between these gene lists are built.

     ...: Additional parameters for internal use (not used for the
          moment)

       y: an object of class "character" (or coercible to "character")
          representing a vector of gene identifiers (e.g., ENTREZ).

listNames: a character(2) with the gene lists names originating the
          cross-tabulated enrichment frequencies. Only in the
          "character" or default interface.

check.table: Logical. If TRUE (the default), the resulting table is
          checked.

geneUniverse: character vector containing the universe of genes from
          where gene lists have been extracted. This vector must be
          obtained from the annotation package declared in 'orgPackg'.
          For more details, refer to vignette goSorensen_Introduction.

orgPackg: A string with the name of the genomic annotation package
          corresponding to a specific species to be analyzed, which
          must be previously installed and activated. For more details,
          refer to vignette goSorensen_Introduction.

    onto: string describing the ontology. Either "BP", "MF" or "CC".

 GOLevel: An integer, the GO ontology level.

storeEnrichedIn: logical. Should the matrix of (GO terms) x (gene
          lists) TRUE/FALSE enrichment values be stored in the result?
          See the details section.

pAdjustMeth: string describing the adjust method, either "BH", "BY" or
          "Bonf", defaults to 'BH'.

pvalCutoff: adjusted pvalue cutoff on enrichment tests to report

qvalCutoff: qvalue cutoff on enrichment tests to report as significant.
          Tests must pass i) pvalueCutoff on unadjusted pvalues, ii)
          pvalueCutoff on adjusted pvalues and iii) qvalueCutoff on
          qvalues to be reported

parallel: Logical. Defaults to FALSE; set it to TRUE for parallel
          computation.

nOfCores: Number of cores for parallel computations. Only in "list"
          interface.

_D_e_t_a_i_l_s:

     If the argument 'storeEnrichedIn' is TRUE (the default value), the
     result of 'buildEnrichTable' includes an additional attribute
     'enriched' with a matrix of TRUE/FALSE values. Each of its rows
     indicates whether a given GO term is enriched or not in each one
     of the gene lists (columns). To save space, only GO terms enriched
     in at least one of the gene lists are included in this matrix.

     Also, to avoid redundancy and to save space, the result of
     'buildEnrichTable.list' (an object of class "tableList", which is
     itself an aggregate of 2x2 contingency tables of class "table")
     has the attribute 'enriched', but its table members do not have
     this attribute.
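
     The link between the 'enriched' attribute and the 2x2 tables can
     be sketched by cross-tabulating two of its columns. This is only
     an illustration, not goSorensen API: 'enrichedMat' is a
     placeholder for attr(result, "enriched"), and the (FALSE, FALSE)
     cell cannot be recovered this way, because GO terms enriched in
     none of the lists are not stored in the matrix:

     ```r
     # 'enrichedMat' stands for attr(result, "enriched"): a logical matrix
     # with GO terms as rows and gene lists as columns (assumed to exist
     # for this illustration).
     crossTab <- table(list1 = enrichedMat[, "atlas"],
                       list2 = enrichedMat[, "sanger"])
     crossTab  # TRUE/FALSE cross-tabulation of enrichment in the two lists
     ```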

     The default value of the argument 'parallel' is FALSE; consider
     the trade-off between the time spent initializing parallelization
     and the possible time gained by parallelizing. It is difficult to
     give a general guideline, but parallelizing is only worthwhile
     when analyzing many gene lists, on the order of 30 or more,
     although this depends on the computer and the application.
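
     A minimal timing sketch of this trade-off (assuming the gene
     lists, gene universe and annotation package used in the Examples
     section are already loaded; nothing here is guaranteed to be
     faster on a given machine):

     ```r
     # Compare sequential vs. parallel construction of all pairwise tables.
     # With only a handful of gene lists, the parallel set-up cost may
     # dominate and the parallel run may even be slower.
     system.time(
       buildEnrichTable(allOncoGeneLists,
                        geneUniverse = humanEntrezIDs,
                        orgPackg = "org.Hs.eg.db",
                        onto = "MF", GOLevel = 6, parallel = FALSE)
     )
     system.time(
       buildEnrichTable(allOncoGeneLists,
                        geneUniverse = humanEntrezIDs,
                        orgPackg = "org.Hs.eg.db",
                        onto = "MF", GOLevel = 6,
                        parallel = TRUE, nOfCores = 2)
     )
     ```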

_V_a_l_u_e:

     In the "character" interface, an object of class "table". It
     represents a 2x2 contingency table, the cross-tabulation of the
     enriched GO terms in two gene lists: "Number of enriched GO terms
     in list 1 (TRUE, FALSE)" x "Number of enriched GO terms in list 2
     (TRUE, FALSE)". In the "list" interface, the result is an object
     of class "tableList" with all pairwise tables. Class "tableList"
     corresponds to objects representing all mutual enrichment
     contingency tables generated in a pairwise fashion: given gene
     lists (i.e., "character" vectors of gene identifiers) l1, l2, ...,
     lk, an object of class "tableList" is a list of lists of
     contingency tables t(i,j), generated from each pair of gene lists
     i and j, with the following structure:

     $l2

     $l2$l1$t(2,1)

     $l3

     $l3$l1$t(3,1), $l3$l2$t(3,2)

     ...

     $lk

     $lk$l1$t(k,1), $lk$l2$t(k,2), ..., $lk$l(k-1)$t(k,k-1)
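
     The structure above can be traversed with ordinary list indexing.
     A hedged sketch (assuming 'tabs' is any "tableList" object, e.g.
     one returned by 'buildEnrichTable' on a list of gene lists, with
     the TRUE row and column printed first as in the examples below):

     ```r
     # Iterate over all pairwise 2x2 tables t(i, j), i > j, in a "tableList".
     for (li in names(tabs)) {          # lists l2, ..., lk
       for (lj in names(tabs[[li]])) {  # lists l1, ..., l(i-1)
         tij <- tabs[[li]][[lj]]        # 2x2 contingency table for the pair
         cat(li, "vs", lj, ": jointly enriched GO terms =", tij[1, 1], "\n")
       }
     }
     ```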

_M_e_t_h_o_d_s (_b_y _c_l_a_s_s):

        • 'buildEnrichTable(default)': S3 default method

        • 'buildEnrichTable(character)': S3 method for class
          "character"

        • 'buildEnrichTable(list)': S3 method for class "list"

_E_x_a_m_p_l_e_s:

     # Obtaining ENTREZ identifiers for the gene universe of humans:
     library(org.Hs.eg.db)
     humanEntrezIDs <- keys(org.Hs.eg.db, keytype = "ENTREZID")
     
     # Gene lists to be explored for enrichment:
     data(allOncoGeneLists)
     ?allOncoGeneLists
     
     # Table of joint GO term enrichment between gene lists Vogelstein and sanger,
     # for ontology MF at GO level 6.
     vog.VS.sang <- buildEnrichTable(allOncoGeneLists[["Vogelstein"]],
                                     allOncoGeneLists[["sanger"]],
                                     geneUniverse = humanEntrezIDs, orgPackg = "org.Hs.eg.db",
                                     onto = "MF", GOLevel = 6, listNames = c("Vogelstein", "sanger"))
     vog.VS.sang
     attr(vog.VS.sang, "enriched")
     
     # All tables of mutual enrichment:
     all.tabs <- buildEnrichTable(allOncoGeneLists,
                                  geneUniverse = humanEntrezIDs, orgPackg = "org.Hs.eg.db",
                                  onto = "MF", GOLevel = 6)
     attr(all.tabs, "enriched")
     all.tabs$waldman
     all.tabs$waldman$atlas
     attr(all.tabs$waldman$atlas, "enriched")
     

> tab <- buildEnrichTable(allOncoGeneLists[["atlas"]], allOncoGeneLists[["sanger"]],
+                         listNames = c("atlas", "sanger"),
+                         onto = "BP", GOLevel = 5,
+                         geneUniverse = humanEntrezIDs, orgPackg = "org.Hs.eg.db")
> 
> tab
                 Enriched in sanger
Enriched in atlas TRUE FALSE
            TRUE   501   429
            FALSE   54  8085
> 
> # (Here, an obviously faster possibility would be to recover the enrichment
> # contingency table from the previous normal test result:)
> tab <- getTable(normTest)
> tab
                 Enriched in sanger
Enriched in atlas TRUE FALSE
            TRUE   501   429
            FALSE   54  8085
> 
> tst <- equivTestSorensen(tab)
> tst

	Normal asymptotic test for 2x2 contingency tables based on the
	Sorensen-Dice dissimilarity

data:  tab
(d - d0) / se = -8.5125, p-value < 2.2e-16
alternative hypothesis: true equivalence limit d0 is less than 0.4444444
95 percent confidence interval:
 0.0000000 0.3482836
sample estimates:
Sorensen dissimilarity 
             0.3252525 
attr(,"se")
standard error 
    0.01400193 

> set.seed(123)
> bootTst <- equivTestSorensen(tab, boot = TRUE)
> bootTst

	Bootstrap test for 2x2 contingency tables based on the Sorensen-Dice
	dissimilarity (10000 bootstrap replicates)

data:  tab
(d - d0) / se = -8.5125, p-value = 9.999e-05
alternative hypothesis: true equivalence limit d0 is less than 0.4444444
95 percent confidence interval:
 0.0000000 0.3484472
sample estimates:
Sorensen dissimilarity 
             0.3252525 
attr(,"se")
standard error 
    0.01400193 

> 
> dSorensen(tab)
[1] 0.3252525
> seSorensen(tab)
[1] 0.01400193
> # or:
> getDissimilarity(tst)
Sorensen dissimilarity 
             0.3252525 
attr(,"se")
standard error 
    0.01400193 
> 
> duppSorensen(tab)
[1] 0.3482836
> getUpper(tst)
   dUpper 
0.3482836 
> 
> set.seed(123)
> duppSorensen(tab, boot = TRUE)
[1] 0.3484472
attr(,"eff.nboot")
[1] 10000
> getUpper(bootTst)
   dUpper 
0.3484472 
> 
> # Performing from scratch all pairwise tests (or other Sorensen-Dice
> # computations) is much slower still. For example, all pairwise...
> # Dissimilarities:
> # # allPairDiss <- dSorensen(allOncoGeneLists,
> # #                          onto = "BP", GOLevel = 5,
> # #                          geneUniverse = humanEntrezIDs, orgPackg = "org.Hs.eg.db")
> # # allPairDiss
> #
> # # Still time consuming but potentially faster: compute in parallel (more precisely,
> # # build all enrichment tables in parallel):
> # allPairDiss <- dSorensen(allOncoGeneLists,
> #                          onto = "BP", GOLevel = 4,
> #                          geneUniverse = humanEntrezIDs, orgPackg = "org.Hs.eg.db",
> #                          parallel = TRUE)
> # allPairDiss
> # # Parallelization does not always result in a speed-up; take into account the
> # # trade-off between parallelization set-up and the possible gain in speed. For
> > # # a few gene lists (like the 7 lists in this example), a negative speed-up is
> # # the most common scenario.
> 
> # Standard errors:
> # seSorensen(allOncoGeneLists,
> #            onto = "BP", GOLevel = 5,
> #            geneUniverse = humanEntrezIDs, orgPackg = "org.Hs.eg.db")
> #
> # Upper confidence interval limits:
> # duppSorensen(allOncoGeneLists,
> #              onto = "BP", GOLevel = 5,
> #              geneUniverse = humanEntrezIDs, orgPackg = "org.Hs.eg.db")
> # All pairwise asymptotic normal tests:
> # allTests <- equivTestSorensen(allOncoGeneLists,
> #                               onto = "BP", GOLevel = 5,
> #                               geneUniverse = humanEntrezIDs, orgPackg = "org.Hs.eg.db")
> # getPvalue(allTests, simplify = FALSE)
> # getPvalue(allTests)
> # p.adjust(getPvalue(allTests), method = "holm")
> # Performing all pairwise bootstrap tests from scratch is even
> # (slightly) more time-consuming:
> # set.seed(123)
> # allBootTests <- equivTestSorensen(allOncoGeneLists,
> #                                   boot = TRUE,
> #                                   onto = "BP", GOLevel = 5,
> #                                   geneUniverse = humanEntrezIDs, orgPackg = "org.Hs.eg.db")
> # Not all bootstrap replicates may lead to finite statistics:
> # getNboot(allBootTests)
> 
> # Given the normal tests (object 'allTests'), it is much faster to upgrade
> # them to bootstrap tests:
> # set.seed(123)
> # allBootTests <- upgrade(allTests, boot = TRUE)
> # getPvalue(allBootTests, simplify = FALSE)
> 
> # Again, the faster and more flexible possibility may be:
> # 1) First, build all pairwise enrichment contingency tables (slow first step):
> # allTabsBP.4 <- buildEnrichTable(allOncoGeneLists,
> #                                 onto = "BP", GOLevel = 4,
> #                                 geneUniverse = humanEntrezIDs, orgPackg = "org.Hs.eg.db")
> # allTabsBP.4
> 
> # Better: directly use the dataset available in this package, goSorensen:
> data("cont_all_BP4")
> cont_all_BP4
$cangenes
$cangenes$atlas
                    Enriched in atlas
Enriched in cangenes TRUE FALSE
               TRUE     0     0
               FALSE  413  3494
attr(,"onto")
[1] "BP"
attr(,"GOLevel")
[1] 4


$cis
$cis$atlas
               Enriched in atlas
Enriched in cis TRUE FALSE
          TRUE    75     6
          FALSE  338  3488

$cis$cangenes
               Enriched in cangenes
Enriched in cis TRUE FALSE
          TRUE     0    81
          FALSE    0  3826
attr(,"onto")
[1] "BP"
attr(,"GOLevel")
[1] 4


$miscellaneous
$miscellaneous$atlas
                         Enriched in atlas
Enriched in miscellaneous TRUE FALSE
                    TRUE   191    26
                    FALSE  222  3468

$miscellaneous$cangenes
                         Enriched in cangenes
Enriched in miscellaneous TRUE FALSE
                    TRUE     0   217
                    FALSE    0  3690
attr(,"onto")
[1] "BP"
attr(,"GOLevel")
[1] 4

$miscellaneous$cis
                         Enriched in cis
Enriched in miscellaneous TRUE FALSE
                    TRUE    67   150
                    FALSE   14  3676


$sanger
$sanger$atlas
                  Enriched in atlas
Enriched in sanger TRUE FALSE
             TRUE   201    29
             FALSE  212  3465

$sanger$cangenes
                  Enriched in cangenes
Enriched in sanger TRUE FALSE
             TRUE     0   230
             FALSE    0  3677
attr(,"onto")
[1] "BP"
attr(,"GOLevel")
[1] 4

$sanger$cis
                  Enriched in cis
Enriched in sanger TRUE FALSE
             TRUE    64   166
             FALSE   17  3660

$sanger$miscellaneous
                  Enriched in miscellaneous
Enriched in sanger TRUE FALSE
             TRUE   155    75
             FALSE   62  3615


$Vogelstein
$Vogelstein$atlas
                      Enriched in atlas
Enriched in Vogelstein TRUE FALSE
                 TRUE   217    35
                 FALSE  196  3459

$Vogelstein$cangenes
                      Enriched in cangenes
Enriched in Vogelstein TRUE FALSE
                 TRUE     0   252
                 FALSE    0  3655
attr(,"onto")
[1] "BP"
attr(,"GOLevel")
[1] 4

$Vogelstein$cis
                      Enriched in cis
Enriched in Vogelstein TRUE FALSE
                 TRUE    63   189
                 FALSE   18  3637

$Vogelstein$miscellaneous
                      Enriched in miscellaneous
Enriched in Vogelstein TRUE FALSE
                 TRUE   155    97
                 FALSE   62  3593

$Vogelstein$sanger
                      Enriched in sanger
Enriched in Vogelstein TRUE FALSE
                 TRUE   213    39
                 FALSE   17  3638


$waldman
$waldman$atlas
                   Enriched in atlas
Enriched in waldman TRUE FALSE
              TRUE   255    41
              FALSE  158  3453

$waldman$cangenes
                   Enriched in cangenes
Enriched in waldman TRUE FALSE
              TRUE     0   296
              FALSE    0  3611
attr(,"onto")
[1] "BP"
attr(,"GOLevel")
[1] 4

$waldman$cis
                   Enriched in cis
Enriched in waldman TRUE FALSE
              TRUE    72   224
              FALSE    9  3602

$waldman$miscellaneous
                   Enriched in miscellaneous
Enriched in waldman TRUE FALSE
              TRUE   198    98
              FALSE   19  3592

$waldman$sanger
                   Enriched in sanger
Enriched in waldman TRUE FALSE
              TRUE   177   119
              FALSE   53  3558

$waldman$Vogelstein
                   Enriched in Vogelstein
Enriched in waldman TRUE FALSE
              TRUE   193   103
              FALSE   59  3552


attr(,"onto")
[1] "BP"
attr(,"GOLevel")
[1] 4
attr(,"class")
[1] "tableList" "list"     
attr(,"enriched")
           atlas cangenes   cis miscellaneous sanger Vogelstein waldman
GO:0001649  TRUE    FALSE  TRUE          TRUE   TRUE       TRUE    TRUE
GO:0030278  TRUE    FALSE FALSE          TRUE  FALSE       TRUE    TRUE
GO:0030279 FALSE    FALSE FALSE         FALSE  FALSE       TRUE    TRUE
GO:0030282  TRUE    FALSE FALSE         FALSE  FALSE       TRUE    TRUE
GO:0036075  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:0045778 FALSE    FALSE FALSE          TRUE  FALSE      FALSE    TRUE
GO:0048755  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:0060688  TRUE    FALSE FALSE          TRUE   TRUE       TRUE    TRUE
GO:0061138  TRUE    FALSE  TRUE          TRUE   TRUE       TRUE    TRUE
GO:0002263  TRUE    FALSE  TRUE          TRUE   TRUE       TRUE    TRUE
GO:0030168  TRUE    FALSE FALSE          TRUE   TRUE      FALSE    TRUE
GO:0042118 FALSE    FALSE FALSE         FALSE   TRUE       TRUE    TRUE
GO:0050866  TRUE    FALSE  TRUE          TRUE   TRUE       TRUE    TRUE
GO:0050867  TRUE    FALSE  TRUE          TRUE   TRUE       TRUE    TRUE
GO:0061900  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:0072537  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:0001780  TRUE    FALSE FALSE          TRUE  FALSE      FALSE   FALSE
GO:0002260  TRUE    FALSE  TRUE          TRUE   TRUE       TRUE    TRUE
GO:0001818  TRUE    FALSE FALSE         FALSE   TRUE       TRUE   FALSE
GO:0002367  TRUE    FALSE FALSE         FALSE   TRUE       TRUE    TRUE
GO:0002534  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:0010573  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:0032602  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:0032609 FALSE    FALSE FALSE          TRUE   TRUE       TRUE    TRUE
GO:0032612  TRUE    FALSE FALSE         FALSE  FALSE       TRUE   FALSE
GO:0032613  TRUE    FALSE FALSE         FALSE   TRUE      FALSE   FALSE
GO:0032615  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:0032623  TRUE    FALSE FALSE         FALSE   TRUE       TRUE   FALSE
GO:0032633 FALSE    FALSE FALSE         FALSE   TRUE       TRUE   FALSE
GO:0032635  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:0071604  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:0071706  TRUE    FALSE FALSE         FALSE   TRUE       TRUE   FALSE
GO:0002562  TRUE    FALSE FALSE          TRUE   TRUE       TRUE    TRUE
GO:0002566 FALSE    FALSE FALSE         FALSE   TRUE       TRUE    TRUE
GO:0016445  TRUE    FALSE FALSE         FALSE   TRUE       TRUE    TRUE
GO:0002433  TRUE    FALSE FALSE          TRUE  FALSE      FALSE    TRUE
GO:0002443  TRUE    FALSE FALSE          TRUE   TRUE       TRUE    TRUE
GO:0002697  TRUE    FALSE  TRUE          TRUE   TRUE       TRUE    TRUE
GO:0002698  TRUE    FALSE  TRUE          TRUE  FALSE      FALSE    TRUE
GO:0002699  TRUE    FALSE  TRUE         FALSE   TRUE       TRUE    TRUE
GO:0043299  TRUE    FALSE  TRUE         FALSE   TRUE       TRUE    TRUE
GO:0002218  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:0034101  TRUE    FALSE FALSE          TRUE   TRUE       TRUE    TRUE
GO:0002377  TRUE    FALSE FALSE         FALSE   TRUE       TRUE    TRUE
GO:0002700  TRUE    FALSE FALSE         FALSE   TRUE       TRUE    TRUE
GO:0002701  TRUE    FALSE FALSE         FALSE  FALSE      FALSE    TRUE
GO:0002702  TRUE    FALSE FALSE         FALSE   TRUE       TRUE   FALSE
GO:0002200  TRUE    FALSE FALSE          TRUE   TRUE       TRUE    TRUE
GO:0048534  TRUE    FALSE  TRUE          TRUE   TRUE       TRUE    TRUE
GO:0002685  TRUE    FALSE  TRUE          TRUE  FALSE      FALSE    TRUE
GO:1903706  TRUE    FALSE  TRUE          TRUE   TRUE       TRUE    TRUE
GO:0002686 FALSE    FALSE FALSE          TRUE  FALSE      FALSE   FALSE
GO:0002695  TRUE    FALSE  TRUE          TRUE   TRUE       TRUE    TRUE
GO:0050777  TRUE    FALSE  TRUE          TRUE   TRUE      FALSE    TRUE
GO:0050858  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:1903707  TRUE    FALSE  TRUE          TRUE   TRUE       TRUE    TRUE
GO:0002687  TRUE    FALSE FALSE         FALSE  FALSE      FALSE    TRUE
GO:0002696  TRUE    FALSE  TRUE          TRUE   TRUE       TRUE    TRUE
GO:1903708  TRUE    FALSE  TRUE          TRUE   TRUE       TRUE    TRUE
GO:0001893  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:0007281  TRUE    FALSE FALSE          TRUE   TRUE       TRUE    TRUE
GO:0007530 FALSE    FALSE FALSE         FALSE  FALSE       TRUE   FALSE
GO:0007548  TRUE    FALSE FALSE          TRUE   TRUE       TRUE    TRUE
GO:0009994 FALSE    FALSE FALSE          TRUE   TRUE       TRUE    TRUE
GO:0033327 FALSE    FALSE FALSE         FALSE  FALSE       TRUE   FALSE
GO:0035234  TRUE    FALSE FALSE         FALSE  FALSE      FALSE    TRUE
GO:0045136  TRUE    FALSE FALSE         FALSE  FALSE      FALSE    TRUE
GO:0045137  TRUE    FALSE FALSE          TRUE   TRUE       TRUE    TRUE
GO:0046697  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:0048608  TRUE    FALSE FALSE          TRUE   TRUE       TRUE    TRUE
GO:0060008 FALSE    FALSE FALSE         FALSE   TRUE       TRUE   FALSE
GO:0060009 FALSE    FALSE FALSE         FALSE  FALSE       TRUE   FALSE
GO:0060512  TRUE    FALSE FALSE         FALSE  FALSE       TRUE    TRUE
GO:0060525  TRUE    FALSE FALSE         FALSE   TRUE       TRUE   FALSE
GO:0060736  TRUE    FALSE FALSE         FALSE  FALSE       TRUE    TRUE
GO:0060740  TRUE    FALSE FALSE         FALSE   TRUE       TRUE    TRUE
GO:0060742  TRUE    FALSE FALSE         FALSE   TRUE       TRUE    TRUE
GO:0003012  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:0000768  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:0001666  TRUE    FALSE FALSE          TRUE   TRUE       TRUE    TRUE
GO:0002931  TRUE    FALSE FALSE         FALSE  FALSE      FALSE    TRUE
GO:0006970  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:0006979  TRUE    FALSE FALSE          TRUE   TRUE       TRUE    TRUE
GO:0009408  TRUE    FALSE FALSE         FALSE   TRUE       TRUE    TRUE
GO:0033555 FALSE    FALSE FALSE          TRUE  FALSE      FALSE    TRUE
GO:0034405  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:0035902 FALSE    FALSE FALSE         FALSE  FALSE      FALSE    TRUE
GO:0035966  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:0042594  TRUE    FALSE FALSE         FALSE   TRUE      FALSE   FALSE
GO:0055093  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:0002437 FALSE    FALSE  TRUE          TRUE  FALSE      FALSE    TRUE
GO:0006959 FALSE    FALSE FALSE         FALSE  FALSE      FALSE    TRUE
GO:0042092  TRUE    FALSE  TRUE          TRUE   TRUE       TRUE    TRUE
GO:0031023 FALSE    FALSE FALSE         FALSE  FALSE      FALSE    TRUE
GO:0032886  TRUE    FALSE FALSE         FALSE  FALSE      FALSE    TRUE
GO:0045786  TRUE    FALSE FALSE          TRUE   TRUE       TRUE    TRUE
GO:0045787  TRUE    FALSE  TRUE          TRUE   TRUE       TRUE    TRUE
GO:0051321  TRUE    FALSE FALSE          TRUE   TRUE       TRUE    TRUE
GO:0007162  TRUE    FALSE  TRUE          TRUE   TRUE       TRUE    TRUE
GO:0031589  TRUE    FALSE  TRUE          TRUE   TRUE       TRUE    TRUE
GO:0033627 FALSE    FALSE FALSE         FALSE  FALSE      FALSE    TRUE
GO:0045785  TRUE    FALSE  TRUE          TRUE   TRUE       TRUE    TRUE
GO:0030010  TRUE    FALSE FALSE          TRUE  FALSE      FALSE    TRUE
GO:0032878 FALSE    FALSE FALSE         FALSE  FALSE      FALSE    TRUE
GO:0061245  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:0061339  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:0009755  TRUE    FALSE FALSE          TRUE   TRUE       TRUE    TRUE
GO:0009756  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:0023019 FALSE    FALSE FALSE          TRUE  FALSE      FALSE    TRUE
GO:0038034  TRUE    FALSE  TRUE          TRUE   TRUE       TRUE    TRUE
GO:0008366  TRUE    FALSE FALSE          TRUE   TRUE       TRUE    TRUE
GO:0007389  TRUE    FALSE  TRUE          TRUE   TRUE       TRUE    TRUE
GO:0007566  TRUE    FALSE FALSE         FALSE  FALSE      FALSE    TRUE
GO:0009791  TRUE    FALSE FALSE          TRUE   TRUE       TRUE    TRUE
GO:0046660  TRUE    FALSE FALSE          TRUE   TRUE       TRUE    TRUE
GO:0046661  TRUE    FALSE FALSE          TRUE   TRUE       TRUE    TRUE
GO:0048736  TRUE    FALSE  TRUE          TRUE   TRUE       TRUE    TRUE
GO:0003002  TRUE    FALSE  TRUE          TRUE   TRUE       TRUE    TRUE
GO:0009798  TRUE    FALSE  TRUE          TRUE   TRUE       TRUE    TRUE
GO:0009799 FALSE    FALSE FALSE         FALSE  FALSE       TRUE   FALSE
GO:0009880  TRUE    FALSE FALSE          TRUE   TRUE       TRUE    TRUE
GO:0007611  TRUE    FALSE FALSE          TRUE   TRUE       TRUE    TRUE
GO:0032922  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:0042752  TRUE    FALSE FALSE         FALSE   TRUE       TRUE   FALSE
GO:0010463  TRUE    FALSE FALSE          TRUE   TRUE       TRUE    TRUE
GO:0014009  TRUE    FALSE  TRUE          TRUE   TRUE       TRUE    TRUE
GO:0033002  TRUE    FALSE FALSE          TRUE   TRUE       TRUE    TRUE
GO:0033687  TRUE    FALSE  TRUE          TRUE   TRUE       TRUE    TRUE
GO:0035988  TRUE    FALSE FALSE         FALSE  FALSE       TRUE   FALSE
GO:0048144  TRUE    FALSE  TRUE          TRUE   TRUE       TRUE    TRUE
GO:0050673  TRUE    FALSE  TRUE          TRUE   TRUE       TRUE    TRUE
GO:0051450  TRUE    FALSE FALSE          TRUE  FALSE      FALSE    TRUE
GO:0061323  TRUE    FALSE FALSE         FALSE   TRUE       TRUE   FALSE
GO:0061351  TRUE    FALSE  TRUE          TRUE   TRUE       TRUE    TRUE
GO:0070661  TRUE    FALSE  TRUE          TRUE   TRUE       TRUE    TRUE
GO:0072089  TRUE    FALSE  TRUE          TRUE   TRUE       TRUE    TRUE
GO:0072111  TRUE    FALSE FALSE          TRUE   TRUE       TRUE    TRUE
GO:0009895  TRUE    FALSE FALSE          TRUE   TRUE       TRUE    TRUE
GO:0072526  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:1901136  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:0006809  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:0016051  TRUE    FALSE FALSE         FALSE  FALSE      FALSE    TRUE
GO:0032964 FALSE    FALSE FALSE         FALSE   TRUE       TRUE   FALSE
GO:0042446  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:0009612  TRUE    FALSE FALSE          TRUE   TRUE       TRUE    TRUE
GO:0009649 FALSE    FALSE FALSE         FALSE   TRUE       TRUE   FALSE
GO:0032102  TRUE    FALSE FALSE          TRUE  FALSE      FALSE    TRUE
GO:0042330  TRUE    FALSE  TRUE          TRUE  FALSE      FALSE    TRUE
GO:0071496  TRUE    FALSE  TRUE         FALSE  FALSE       TRUE    TRUE
GO:0002347  TRUE    FALSE  TRUE         FALSE   TRUE      FALSE   FALSE
GO:0002833  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:0071216  TRUE    FALSE  TRUE          TRUE   TRUE       TRUE    TRUE
GO:1990840  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:0009266  TRUE    FALSE FALSE         FALSE   TRUE       TRUE    TRUE
GO:0009314  TRUE    FALSE FALSE          TRUE   TRUE       TRUE    TRUE
GO:0051602  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:0070482  TRUE    FALSE FALSE          TRUE   TRUE       TRUE    TRUE
GO:0071214  TRUE    FALSE FALSE          TRUE   TRUE       TRUE    TRUE
GO:0001763  TRUE    FALSE  TRUE          TRUE   TRUE       TRUE    TRUE
GO:0003151  TRUE    FALSE FALSE          TRUE   TRUE       TRUE    TRUE
GO:0003179  TRUE    FALSE FALSE          TRUE   TRUE       TRUE    TRUE
GO:0003206  TRUE    FALSE FALSE          TRUE   TRUE       TRUE    TRUE
GO:0007440  TRUE    FALSE FALSE         FALSE  FALSE       TRUE   FALSE
GO:0010171 FALSE    FALSE FALSE         FALSE   TRUE       TRUE    TRUE
GO:0021575 FALSE    FALSE FALSE          TRUE  FALSE      FALSE   FALSE
GO:0021587 FALSE    FALSE FALSE          TRUE  FALSE      FALSE    TRUE
GO:0031069  TRUE    FALSE FALSE         FALSE   TRUE       TRUE    TRUE
GO:0035107  TRUE    FALSE  TRUE          TRUE   TRUE       TRUE    TRUE
GO:0048532  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:0048853 FALSE    FALSE FALSE          TRUE  FALSE       TRUE    TRUE
GO:0060323  TRUE    FALSE FALSE         FALSE   TRUE       TRUE    TRUE
GO:0060325 FALSE    FALSE FALSE         FALSE  FALSE      FALSE    TRUE
GO:0060411  TRUE    FALSE FALSE          TRUE   TRUE       TRUE    TRUE
GO:0060560  TRUE    FALSE FALSE          TRUE  FALSE       TRUE    TRUE
GO:0060561  TRUE    FALSE FALSE          TRUE  FALSE      FALSE    TRUE
GO:0061383  TRUE    FALSE FALSE         FALSE  FALSE      FALSE    TRUE
GO:0071697  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:0072028  TRUE    FALSE FALSE          TRUE   TRUE       TRUE    TRUE
GO:0097094 FALSE    FALSE FALSE         FALSE  FALSE      FALSE    TRUE
GO:0010713  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:0045833  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:0045912  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:0062014  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:0120163 FALSE    FALSE FALSE         FALSE   TRUE       TRUE   FALSE
GO:0032352  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:0045834  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:0045913  TRUE    FALSE FALSE         FALSE  FALSE       TRUE    TRUE
GO:0062013  TRUE    FALSE FALSE         FALSE  FALSE      FALSE    TRUE
GO:0120162  TRUE    FALSE FALSE         FALSE   TRUE       TRUE   FALSE
GO:1904407  TRUE    FALSE FALSE         FALSE  FALSE       TRUE   FALSE
GO:0001558  TRUE    FALSE FALSE          TRUE   TRUE       TRUE    TRUE
GO:0030307 FALSE    FALSE FALSE          TRUE  FALSE      FALSE    TRUE
GO:0030308  TRUE    FALSE FALSE          TRUE   TRUE       TRUE    TRUE
GO:0048588  TRUE    FALSE FALSE          TRUE  FALSE       TRUE    TRUE
GO:0006887  TRUE    FALSE FALSE         FALSE  FALSE      FALSE    TRUE
GO:0045056  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:0046718  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:0019083  TRUE    FALSE FALSE         FALSE   TRUE      FALSE   FALSE
GO:0043923 FALSE    FALSE FALSE         FALSE   TRUE      FALSE   FALSE
GO:0010712 FALSE    FALSE FALSE         FALSE  FALSE       TRUE   FALSE
GO:0032350  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:0034248  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:0060263  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:0062012  TRUE    FALSE FALSE         FALSE  FALSE       TRUE    TRUE
GO:0080164  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:0120161  TRUE    FALSE  TRUE         FALSE   TRUE       TRUE   FALSE
GO:0035019  TRUE    FALSE  TRUE          TRUE   TRUE       TRUE    TRUE
GO:0097150  TRUE    FALSE FALSE         FALSE   TRUE       TRUE   FALSE
GO:1902455  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:1902459  TRUE    FALSE FALSE         FALSE   TRUE       TRUE   FALSE
GO:2000036  TRUE    FALSE FALSE         FALSE   TRUE       TRUE   FALSE
GO:0071695  TRUE    FALSE FALSE          TRUE   TRUE       TRUE    TRUE
GO:0007051  TRUE    FALSE FALSE         FALSE   TRUE       TRUE   FALSE
GO:0007059  TRUE    FALSE FALSE          TRUE   TRUE       TRUE    TRUE
GO:0007062 FALSE    FALSE FALSE         FALSE   TRUE       TRUE   FALSE
GO:0007098  TRUE    FALSE FALSE         FALSE  FALSE      FALSE    TRUE
GO:0008608  TRUE    FALSE FALSE         FALSE  FALSE      FALSE    TRUE
GO:0010948  TRUE    FALSE FALSE          TRUE   TRUE       TRUE    TRUE
GO:0044786  TRUE    FALSE FALSE         FALSE   TRUE       TRUE    TRUE
GO:0045023  TRUE    FALSE FALSE         FALSE   TRUE       TRUE   FALSE
GO:0051304  TRUE    FALSE FALSE          TRUE   TRUE       TRUE    TRUE
GO:0051653  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:0090068  TRUE    FALSE FALSE          TRUE   TRUE       TRUE    TRUE
GO:1903046  TRUE    FALSE FALSE          TRUE   TRUE       TRUE    TRUE
GO:0022405  TRUE    FALSE FALSE          TRUE   TRUE       TRUE    TRUE
GO:0046883  TRUE    FALSE FALSE         FALSE  FALSE       TRUE    TRUE
GO:0046887  TRUE    FALSE FALSE         FALSE   TRUE       TRUE    TRUE
GO:0032970  TRUE    FALSE  TRUE          TRUE  FALSE      FALSE    TRUE
GO:0001759  TRUE    FALSE FALSE          TRUE   TRUE       TRUE    TRUE
GO:0031295  TRUE    FALSE  TRUE          TRUE   TRUE       TRUE    TRUE
GO:0099590 FALSE    FALSE FALSE         FALSE   TRUE      FALSE   FALSE
GO:0007584  TRUE    FALSE FALSE         FALSE   TRUE       TRUE    TRUE
GO:0031669  TRUE    FALSE FALSE          TRUE   TRUE       TRUE    TRUE
GO:0032107  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:0051282  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:1905952  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:0006403  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:0040014 FALSE    FALSE FALSE          TRUE  FALSE       TRUE   FALSE
GO:0046620  TRUE    FALSE FALSE          TRUE   TRUE       TRUE    TRUE
GO:0046622  TRUE    FALSE  TRUE          TRUE   TRUE       TRUE    TRUE
GO:0060419  TRUE    FALSE FALSE          TRUE   TRUE       TRUE    TRUE
GO:0098868 FALSE    FALSE FALSE         FALSE  FALSE       TRUE    TRUE
GO:0045926  TRUE    FALSE FALSE          TRUE   TRUE       TRUE    TRUE
GO:0045927  TRUE    FALSE  TRUE          TRUE   TRUE       TRUE    TRUE
GO:0048638  TRUE    FALSE  TRUE          TRUE   TRUE       TRUE    TRUE
GO:0040013  TRUE    FALSE FALSE          TRUE   TRUE       TRUE    TRUE
GO:0050920  TRUE    FALSE  TRUE          TRUE  FALSE      FALSE    TRUE
GO:0050922 FALSE    FALSE FALSE          TRUE  FALSE      FALSE   FALSE
GO:2000146  TRUE    FALSE FALSE          TRUE   TRUE       TRUE    TRUE
GO:0050921  TRUE    FALSE  TRUE         FALSE  FALSE      FALSE    TRUE
GO:0001101  TRUE    FALSE FALSE         FALSE   TRUE       TRUE    TRUE
GO:0006935  TRUE    FALSE  TRUE          TRUE  FALSE      FALSE    TRUE
GO:0009410  TRUE    FALSE FALSE          TRUE   TRUE       TRUE    TRUE
GO:0009636 FALSE    FALSE FALSE         FALSE  FALSE      FALSE    TRUE
GO:0010038  TRUE    FALSE FALSE         FALSE  FALSE       TRUE    TRUE
GO:0035094  TRUE    FALSE FALSE          TRUE  FALSE      FALSE    TRUE
GO:0046677  TRUE    FALSE  TRUE         FALSE   TRUE       TRUE   FALSE
GO:0046683  TRUE    FALSE FALSE         FALSE  FALSE      FALSE    TRUE
GO:1902074  TRUE    FALSE FALSE         FALSE  FALSE      FALSE    TRUE
GO:0022404  TRUE    FALSE FALSE          TRUE   TRUE       TRUE    TRUE
GO:0042633  TRUE    FALSE FALSE          TRUE   TRUE       TRUE    TRUE
GO:0022602  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:0044849 FALSE    FALSE FALSE         FALSE  FALSE      FALSE    TRUE
GO:0005976 FALSE    FALSE FALSE         FALSE  FALSE      FALSE    TRUE
GO:0043502 FALSE    FALSE FALSE         FALSE   TRUE      FALSE   FALSE
GO:0050435  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:0043697  TRUE    FALSE FALSE         FALSE  FALSE       TRUE   FALSE
GO:0006091  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:0006413  TRUE    FALSE FALSE          TRUE  FALSE      FALSE   FALSE
GO:0042180  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:0072593  TRUE    FALSE FALSE          TRUE  FALSE      FALSE    TRUE
GO:0090398  TRUE    FALSE  TRUE          TRUE   TRUE       TRUE    TRUE
GO:0090399  TRUE    FALSE FALSE          TRUE   TRUE       TRUE    TRUE
GO:0006099 FALSE    FALSE FALSE         FALSE   TRUE       TRUE   FALSE
GO:0005996  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:0051702  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:0007565  TRUE    FALSE FALSE         FALSE  FALSE      FALSE    TRUE
GO:0043368  TRUE    FALSE  TRUE         FALSE   TRUE       TRUE   FALSE
GO:0045061 FALSE    FALSE FALSE         FALSE   TRUE       TRUE   FALSE
GO:0002274  TRUE    FALSE  TRUE          TRUE   TRUE       TRUE    TRUE
GO:0002366  TRUE    FALSE  TRUE          TRUE   TRUE       TRUE    TRUE
GO:0048640  TRUE    FALSE FALSE          TRUE  FALSE      FALSE    TRUE
GO:0048639  TRUE    FALSE  TRUE          TRUE   TRUE       TRUE    TRUE
GO:1905954  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:2000243  TRUE    FALSE FALSE          TRUE  FALSE      FALSE    TRUE
GO:0051051  TRUE    FALSE FALSE         FALSE  FALSE      FALSE    TRUE
GO:1900047  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:1905953  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:2000242  TRUE    FALSE FALSE         FALSE  FALSE      FALSE    TRUE
GO:0032388  TRUE    FALSE FALSE         FALSE   TRUE       TRUE    TRUE
GO:0034764  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:0045739  TRUE    FALSE FALSE         FALSE   TRUE       TRUE   FALSE
GO:0051781  TRUE    FALSE FALSE          TRUE  FALSE      FALSE    TRUE
GO:1903532  TRUE    FALSE FALSE         FALSE  FALSE       TRUE    TRUE
GO:1903829  TRUE    FALSE FALSE          TRUE   TRUE       TRUE    TRUE
GO:1905898  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:0045738  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:0051283  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:1903531 FALSE    FALSE  TRUE         FALSE  FALSE      FALSE    TRUE
GO:0060759  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:0090287  TRUE    FALSE FALSE          TRUE   TRUE       TRUE    TRUE
GO:1900076  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:0070572  TRUE    FALSE FALSE         FALSE  FALSE      FALSE    TRUE
GO:1903036  TRUE    FALSE FALSE         FALSE  FALSE      FALSE    TRUE
GO:1903846  TRUE    FALSE FALSE          TRUE   TRUE       TRUE    TRUE
GO:0031348  TRUE    FALSE FALSE          TRUE  FALSE      FALSE    TRUE
GO:0060761  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:0090288 FALSE    FALSE FALSE          TRUE  FALSE      FALSE   FALSE
GO:1903035  TRUE    FALSE FALSE         FALSE  FALSE      FALSE    TRUE
GO:0001832 FALSE    FALSE FALSE         FALSE   TRUE       TRUE   FALSE
GO:0035264  TRUE    FALSE FALSE         FALSE   TRUE       TRUE   FALSE
GO:0035265  TRUE    FALSE  TRUE          TRUE   TRUE       TRUE    TRUE
GO:0042246  TRUE    FALSE FALSE         FALSE  FALSE      FALSE    TRUE
GO:0055017  TRUE    FALSE FALSE          TRUE   TRUE       TRUE    TRUE
GO:0022412  TRUE    FALSE FALSE          TRUE   TRUE       TRUE    TRUE
GO:0030728  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:0042698  TRUE    FALSE FALSE         FALSE  FALSE      FALSE    TRUE
GO:0060135  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:0001704  TRUE    FALSE FALSE          TRUE   TRUE       TRUE    TRUE
GO:0001756  TRUE    FALSE  TRUE         FALSE   TRUE       TRUE    TRUE
GO:0001825  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:0002467 FALSE    FALSE  TRUE         FALSE   TRUE       TRUE    TRUE
GO:0003188 FALSE    FALSE FALSE         FALSE  FALSE       TRUE   FALSE
GO:0003272  TRUE    FALSE FALSE          TRUE  FALSE      FALSE    TRUE
GO:0006949  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:0030220  TRUE    FALSE FALSE         FALSE   TRUE       TRUE   FALSE
GO:0035148  TRUE    FALSE  TRUE          TRUE   TRUE       TRUE    TRUE
GO:0048645  TRUE    FALSE FALSE          TRUE   TRUE       TRUE    TRUE
GO:0060343  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:0060788  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:0060900  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:0001974  TRUE    FALSE FALSE          TRUE  FALSE      FALSE    TRUE
GO:0034103  TRUE    FALSE FALSE         FALSE   TRUE       TRUE    TRUE
GO:0034104  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:0046849  TRUE    FALSE FALSE         FALSE   TRUE       TRUE    TRUE
GO:0001541  TRUE    FALSE FALSE          TRUE   TRUE       TRUE    TRUE
GO:0001824  TRUE    FALSE FALSE         FALSE   TRUE       TRUE   FALSE
GO:0001942  TRUE    FALSE FALSE          TRUE   TRUE       TRUE    TRUE
GO:0002088  TRUE    FALSE FALSE          TRUE  FALSE      FALSE    TRUE
GO:0003157 FALSE    FALSE FALSE         FALSE   TRUE      FALSE   FALSE
GO:0003170  TRUE    FALSE FALSE          TRUE   TRUE       TRUE    TRUE
GO:0003205  TRUE    FALSE FALSE          TRUE   TRUE       TRUE    TRUE
GO:0003279  TRUE    FALSE FALSE          TRUE   TRUE       TRUE    TRUE
GO:0016358  TRUE    FALSE FALSE          TRUE  FALSE      FALSE   FALSE
GO:0021510  TRUE    FALSE FALSE          TRUE   TRUE       TRUE   FALSE
GO:0021516 FALSE    FALSE FALSE          TRUE  FALSE      FALSE   FALSE
GO:0021517  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:0021536  TRUE    FALSE FALSE          TRUE   TRUE       TRUE    TRUE
GO:0021537  TRUE    FALSE  TRUE          TRUE   TRUE       TRUE    TRUE
GO:0021543  TRUE    FALSE FALSE          TRUE   TRUE       TRUE    TRUE
GO:0021549  TRUE    FALSE FALSE          TRUE   TRUE       TRUE    TRUE
GO:0021670 FALSE    FALSE  TRUE         FALSE  FALSE      FALSE   FALSE
GO:0021675  TRUE    FALSE FALSE         FALSE   TRUE      FALSE   FALSE
GO:0021766  TRUE    FALSE FALSE          TRUE   TRUE       TRUE    TRUE
GO:0021772  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:0021794 FALSE    FALSE FALSE         FALSE  FALSE       TRUE   FALSE
GO:0021987  TRUE    FALSE FALSE          TRUE  FALSE       TRUE    TRUE
GO:0021988  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:0022037  TRUE    FALSE FALSE          TRUE   TRUE       TRUE    TRUE
GO:0030900  TRUE    FALSE  TRUE          TRUE   TRUE       TRUE    TRUE
GO:0030901  TRUE    FALSE FALSE          TRUE  FALSE      FALSE    TRUE
GO:0030902  TRUE    FALSE FALSE          TRUE   TRUE       TRUE    TRUE
GO:0031018  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:0031099  TRUE    FALSE  TRUE          TRUE   TRUE       TRUE    TRUE
GO:0032835  TRUE    FALSE FALSE          TRUE   TRUE       TRUE    TRUE
GO:0036302  TRUE    FALSE FALSE         FALSE  FALSE       TRUE   FALSE
GO:0048286 FALSE    FALSE FALSE         FALSE  FALSE      FALSE    TRUE
GO:0048839  TRUE    FALSE FALSE          TRUE   TRUE       TRUE    TRUE
GO:0048857  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:0060021  TRUE    FALSE FALSE          TRUE  FALSE      FALSE    TRUE
GO:0060324  TRUE    FALSE FALSE          TRUE   TRUE       TRUE    TRUE
GO:0060430  TRUE    FALSE FALSE         FALSE   TRUE       TRUE   FALSE
GO:0060711  TRUE    FALSE FALSE         FALSE  FALSE       TRUE    TRUE
GO:0060749  TRUE    FALSE FALSE         FALSE  FALSE      FALSE    TRUE
GO:0061029  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:0061377  TRUE    FALSE FALSE         FALSE  FALSE      FALSE    TRUE
GO:0072006  TRUE    FALSE  TRUE          TRUE   TRUE       TRUE    TRUE
GO:1902742  TRUE    FALSE FALSE         FALSE  FALSE      FALSE    TRUE
GO:1904888  TRUE    FALSE FALSE          TRUE   TRUE       TRUE    TRUE
GO:0001708  TRUE    FALSE FALSE          TRUE   TRUE       TRUE    TRUE
GO:0001709  TRUE    FALSE  TRUE          TRUE   TRUE       TRUE   FALSE
GO:0010623  TRUE    FALSE FALSE          TRUE  FALSE      FALSE    TRUE
GO:0045165  TRUE    FALSE  TRUE          TRUE   TRUE       TRUE    TRUE
GO:0048469  TRUE    FALSE FALSE          TRUE   TRUE       TRUE    TRUE
GO:0001659  TRUE    FALSE  TRUE          TRUE   TRUE       TRUE   FALSE
GO:0001894  TRUE    FALSE FALSE          TRUE   TRUE       TRUE    TRUE
GO:0048872  TRUE    FALSE  TRUE          TRUE   TRUE       TRUE    TRUE
GO:0060249  TRUE    FALSE FALSE          TRUE   TRUE       TRUE    TRUE
GO:0097009  TRUE    FALSE FALSE         FALSE  FALSE       TRUE   FALSE
GO:0140962 FALSE    FALSE FALSE         FALSE   TRUE       TRUE   FALSE
GO:0033500  TRUE    FALSE FALSE         FALSE   TRUE       TRUE    TRUE
GO:1900046  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:2000241  TRUE    FALSE FALSE          TRUE  FALSE      FALSE    TRUE
GO:0010453  TRUE    FALSE  TRUE         FALSE  FALSE      FALSE   FALSE
GO:0040034  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:0045682  TRUE    FALSE FALSE         FALSE  FALSE       TRUE    TRUE
GO:0048634 FALSE    FALSE  TRUE          TRUE  FALSE      FALSE    TRUE
GO:0070570  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:0090183 FALSE    FALSE FALSE         FALSE   TRUE       TRUE    TRUE
GO:1901861 FALSE    FALSE  TRUE          TRUE  FALSE      FALSE    TRUE
GO:1904748  TRUE    FALSE FALSE         FALSE  FALSE      FALSE    TRUE
GO:0031641  TRUE    FALSE FALSE         FALSE  FALSE      FALSE    TRUE
GO:0034762  TRUE    FALSE FALSE         FALSE  FALSE      FALSE    TRUE
GO:0051302  TRUE    FALSE FALSE          TRUE  FALSE      FALSE    TRUE
GO:0060353  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:1900117  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:0007596  TRUE    FALSE FALSE          TRUE  FALSE      FALSE   FALSE
GO:0050819  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:0002523  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:0030595  TRUE    FALSE  TRUE          TRUE  FALSE      FALSE    TRUE
GO:0071674  TRUE    FALSE  TRUE          TRUE  FALSE      FALSE    TRUE
GO:0097529  TRUE    FALSE FALSE          TRUE  FALSE      FALSE    TRUE
GO:0032370  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:0043270  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:0045807  TRUE    FALSE FALSE          TRUE  FALSE      FALSE    TRUE
GO:0051047  TRUE    FALSE FALSE         FALSE  FALSE       TRUE    TRUE
GO:0051222  TRUE    FALSE FALSE         FALSE   TRUE       TRUE    TRUE
GO:0051048  TRUE    FALSE FALSE         FALSE  FALSE      FALSE    TRUE
GO:0048635 FALSE    FALSE FALSE         FALSE  FALSE      FALSE    TRUE
GO:0051961  TRUE    FALSE FALSE          TRUE   TRUE       TRUE    TRUE
GO:0061037 FALSE    FALSE FALSE         FALSE  FALSE      FALSE    TRUE
GO:0070168 FALSE    FALSE FALSE         FALSE  FALSE       TRUE   FALSE
GO:1901343  TRUE    FALSE FALSE          TRUE  FALSE      FALSE    TRUE
GO:1901862 FALSE    FALSE FALSE         FALSE  FALSE      FALSE    TRUE
GO:0045684  TRUE    FALSE FALSE         FALSE  FALSE       TRUE   FALSE
GO:0045830 FALSE    FALSE FALSE          TRUE   TRUE       TRUE    TRUE
GO:0048636 FALSE    FALSE FALSE          TRUE   TRUE       TRUE    TRUE
GO:0051798  TRUE    FALSE FALSE          TRUE  FALSE      FALSE    TRUE
GO:0051962  TRUE    FALSE FALSE          TRUE   TRUE       TRUE    TRUE
GO:0090184 FALSE    FALSE FALSE         FALSE  FALSE       TRUE   FALSE
GO:0110110  TRUE    FALSE FALSE          TRUE   TRUE       TRUE    TRUE
GO:1901863 FALSE    FALSE FALSE          TRUE   TRUE       TRUE    TRUE
GO:1904018  TRUE    FALSE FALSE          TRUE  FALSE      FALSE    TRUE
GO:1904179 FALSE    FALSE FALSE         FALSE   TRUE       TRUE   FALSE
GO:1905332  TRUE    FALSE FALSE         FALSE   TRUE       TRUE    TRUE
GO:0051656  TRUE    FALSE FALSE         FALSE   TRUE       TRUE    TRUE
GO:0051651  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:0010632  TRUE    FALSE FALSE         FALSE  FALSE      FALSE    TRUE
GO:0042634 FALSE    FALSE FALSE          TRUE  FALSE      FALSE    TRUE
GO:0010718  TRUE    FALSE FALSE          TRUE  FALSE       TRUE    TRUE
GO:0045618  TRUE    FALSE FALSE         FALSE  FALSE       TRUE   FALSE
GO:0045933  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:2000833  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:0010633  TRUE    FALSE FALSE         FALSE  FALSE      FALSE    TRUE
GO:0014741  TRUE    FALSE FALSE         FALSE   TRUE       TRUE   FALSE
GO:0008356  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:0017145  TRUE    FALSE FALSE         FALSE   TRUE       TRUE   FALSE
GO:0051446 FALSE    FALSE FALSE         FALSE  FALSE      FALSE    TRUE
GO:0050000  TRUE    FALSE FALSE         FALSE   TRUE      FALSE   FALSE
GO:0051647  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:1990849  TRUE    FALSE FALSE         FALSE   TRUE       TRUE    TRUE
GO:0051208  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:0009615  TRUE    FALSE FALSE          TRUE  FALSE      FALSE   FALSE
GO:0009620  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:0104004  TRUE    FALSE FALSE          TRUE   TRUE       TRUE    TRUE
GO:0034219 FALSE    FALSE FALSE          TRUE   TRUE       TRUE   FALSE
GO:0051642  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:0007204  TRUE    FALSE FALSE         FALSE  FALSE      FALSE    TRUE
GO:0008360  TRUE    FALSE FALSE          TRUE  FALSE      FALSE    TRUE
GO:0010522  TRUE    FALSE FALSE          TRUE  FALSE      FALSE    TRUE
GO:0031647  TRUE    FALSE FALSE          TRUE   TRUE       TRUE   FALSE
GO:0043114  TRUE    FALSE FALSE         FALSE  FALSE      FALSE    TRUE
GO:0050803  TRUE    FALSE FALSE         FALSE  FALSE      FALSE    TRUE
GO:0050878  TRUE    FALSE FALSE          TRUE   TRUE       TRUE    TRUE
GO:0090559  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:0099072  TRUE    FALSE FALSE          TRUE   TRUE       TRUE    TRUE
GO:0099149 FALSE    FALSE FALSE          TRUE   TRUE      FALSE   FALSE
GO:0010469  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:0051090  TRUE    FALSE  TRUE          TRUE   TRUE       TRUE    TRUE
GO:0051098  TRUE    FALSE FALSE          TRUE   TRUE       TRUE    TRUE
GO:0019362  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:0006206  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:0090132  TRUE    FALSE  TRUE          TRUE  FALSE      FALSE    TRUE
GO:0006921  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:1900119  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:0019827  TRUE    FALSE  TRUE          TRUE   TRUE       TRUE    TRUE
GO:0001502 FALSE    FALSE FALSE          TRUE  FALSE      FALSE    TRUE
GO:0140353  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:0030193  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:0030195  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:0042359  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
GO:0000212 FALSE    FALSE FALSE          TRUE   TRUE      FALSE   FALSE
GO:0044771 FALSE    FALSE FALSE         FALSE  FALSE      FALSE    TRUE
GO:0045132 FALSE    FALSE FALSE          TRUE   TRUE      FALSE   FALSE
GO:0061982  TRUE    FALSE FALSE          TRUE   TRUE      FALSE    TRUE
GO:0140013  TRUE    FALSE FALSE          TRUE   TRUE      FALSE    TRUE
GO:0106106  TRUE    FALSE  TRUE         FALSE   TRUE       TRUE   FALSE
GO:1901993 FALSE    FALSE FALSE         FALSE  FALSE      FALSE    TRUE
GO:0046209  TRUE    FALSE FALSE         FALSE  FALSE      FALSE   FALSE
attr(,"enriched")
attr(,"nTerms")
[1] 3907
> class(cont_all_BP4)
[1] "tableList" "list"     
> # 2) Then perform all required computations from these enrichment contingency tables...
> # All pairwise tests:
> allTests <- equivTestSorensen(cont_all_BP4)
> allTests
$cangenes
$cangenes$atlas

	No test performed due not finite (d - d0) / se statistic

data:  tab
(d - d0) / se = Inf, p-value = NA
alternative hypothesis: true equivalence limit d0 is less than 0.4444444
95 percent confidence interval:
   0 NaN
sample estimates:
Sorensen dissimilarity 
                     1 
attr(,"se")
standard error 
             0 



$cis
$cis$atlas

	Normal asymptotic test for 2x2 contingency tables based on the
	Sorensen-Dice dissimilarity

data:  tab
(d - d0) / se = 9.3376, p-value = 1
alternative hypothesis: true equivalence limit d0 is less than 0.4444444
95 percent confidence interval:
 0.0000000 0.7407313
sample estimates:
Sorensen dissimilarity 
             0.6963563 
attr(,"se")
standard error 
    0.02697813 


$cis$cangenes

	No test performed due not finite (d - d0) / se statistic

data:  tab
(d - d0) / se = Inf, p-value = NA
alternative hypothesis: true equivalence limit d0 is less than 0.4444444
95 percent confidence interval:
   0 NaN
sample estimates:
Sorensen dissimilarity 
                     1 
attr(,"se")
standard error 
             0 



$miscellaneous
$miscellaneous$atlas

	Normal asymptotic test for 2x2 contingency tables based on the
	Sorensen-Dice dissimilarity

data:  tab
(d - d0) / se = -2.208, p-value = 0.01362
alternative hypothesis: true equivalence limit d0 is less than 0.4444444
95 percent confidence interval:
 0.0000000 0.4314904
sample estimates:
Sorensen dissimilarity 
             0.3936508 
attr(,"se")
standard error 
    0.02300482 


$miscellaneous$cangenes

	No test performed due not finite (d - d0) / se statistic

data:  tab
(d - d0) / se = Inf, p-value = NA
alternative hypothesis: true equivalence limit d0 is less than 0.4444444
95 percent confidence interval:
   0 NaN
sample estimates:
Sorensen dissimilarity 
                     1 
attr(,"se")
standard error 
             0 


$miscellaneous$cis

	Normal asymptotic test for 2x2 contingency tables based on the
	Sorensen-Dice dissimilarity

data:  tab
(d - d0) / se = 2.9448, p-value = 0.9984
alternative hypothesis: true equivalence limit d0 is less than 0.4444444
95 percent confidence interval:
 0.0000000 0.6094825
sample estimates:
Sorensen dissimilarity 
             0.5503356 
attr(,"se")
standard error 
    0.03595877 



$sanger
$sanger$atlas

	Normal asymptotic test for 2x2 contingency tables based on the
	Sorensen-Dice dissimilarity

data:  tab
(d - d0) / se = -3.1077, p-value = 0.0009429
alternative hypothesis: true equivalence limit d0 is less than 0.4444444
95 percent confidence interval:
 0.0000000 0.4116647
sample estimates:
Sorensen dissimilarity 
             0.3748056 
attr(,"se")
standard error 
    0.02240875 


$sanger$cangenes

	No test performed due not finite (d - d0) / se statistic

data:  tab
(d - d0) / se = Inf, p-value = NA
alternative hypothesis: true equivalence limit d0 is less than 0.4444444
95 percent confidence interval:
   0 NaN
sample estimates:
Sorensen dissimilarity 
                     1 
attr(,"se")
standard error 
             0 


$sanger$cis

	Normal asymptotic test for 2x2 contingency tables based on the
	Sorensen-Dice dissimilarity

data:  tab
(d - d0) / se = 4.0855, p-value = 1
alternative hypothesis: true equivalence limit d0 is less than 0.4444444
95 percent confidence interval:
 0.0000000 0.6463915
sample estimates:
Sorensen dissimilarity 
             0.5884244 
attr(,"se")
standard error 
    0.03524148 


$sanger$miscellaneous

	Normal asymptotic test for 2x2 contingency tables based on the
	Sorensen-Dice dissimilarity

data:  tab
(d - d0) / se = -5.5254, p-value = 1.643e-08
alternative hypothesis: true equivalence limit d0 is less than 0.4444444
95 percent confidence interval:
 0.0000000 0.3475558
sample estimates:
Sorensen dissimilarity 
             0.3064877 
attr(,"se")
standard error 
    0.02496764 



$Vogelstein
$Vogelstein$atlas

	Normal asymptotic test for 2x2 contingency tables based on the
	Sorensen-Dice dissimilarity

data:  tab
(d - d0) / se = -4.5244, p-value = 3.028e-06
alternative hypothesis: true equivalence limit d0 is less than 0.4444444
95 percent confidence interval:
 0.0000000 0.3826602
sample estimates:
Sorensen dissimilarity 
             0.3473684 
attr(,"se")
standard error 
     0.0214559 


$Vogelstein$cangenes

	No test performed due not finite (d - d0) / se statistic

data:  tab
(d - d0) / se = Inf, p-value = NA
alternative hypothesis: true equivalence limit d0 is less than 0.4444444
95 percent confidence interval:
   0 NaN
sample estimates:
Sorensen dissimilarity 
                     1 
attr(,"se")
standard error 
             0 


$Vogelstein$cis

	Normal asymptotic test for 2x2 contingency tables based on the
	Sorensen-Dice dissimilarity

data:  tab
(d - d0) / se = 5.2254, p-value = 1
alternative hypothesis: true equivalence limit d0 is less than 0.4444444
95 percent confidence interval:
 0.0000000 0.6773931
sample estimates:
Sorensen dissimilarity 
             0.6216216 
attr(,"se")
standard error 
    0.03390663 


$Vogelstein$miscellaneous

	Normal asymptotic test for 2x2 contingency tables based on the
	Sorensen-Dice dissimilarity

data:  tab
(d - d0) / se = -4.1614, p-value = 1.582e-05
alternative hypothesis: true equivalence limit d0 is less than 0.4444444
95 percent confidence interval:
 0.0000000 0.3806901
sample estimates:
Sorensen dissimilarity 
             0.3390192 
attr(,"se")
standard error 
    0.02533414 


$Vogelstein$sanger

	Normal asymptotic test for 2x2 contingency tables based on the
	Sorensen-Dice dissimilarity

data:  tab
(d - d0) / se = -21.248, p-value < 2.2e-16
alternative hypothesis: true equivalence limit d0 is less than 0.4444444
95 percent confidence interval:
 0.0000000 0.1415942
sample estimates:
Sorensen dissimilarity 
             0.1161826 
attr(,"se")
standard error 
    0.01544915 



$waldman
$waldman$atlas

	Normal asymptotic test for 2x2 contingency tables based on the
	Sorensen-Dice dissimilarity

data:  tab
(d - d0) / se = -8.5662, p-value < 2.2e-16
alternative hypothesis: true equivalence limit d0 is less than 0.4444444
95 percent confidence interval:
 0.0000000 0.3121232
sample estimates:
Sorensen dissimilarity 
              0.280677 
attr(,"se")
standard error 
    0.01911793 


$waldman$cangenes

	No test performed due not finite (d - d0) / se statistic

data:  tab
(d - d0) / se = Inf, p-value = NA
alternative hypothesis: true equivalence limit d0 is less than 0.4444444
95 percent confidence interval:
   0 NaN
sample estimates:
Sorensen dissimilarity 
                     1 
attr(,"se")
standard error 
             0 


$waldman$cis

	Normal asymptotic test for 2x2 contingency tables based on the
	Sorensen-Dice dissimilarity

data:  tab
(d - d0) / se = 5.4447, p-value = 1
alternative hypothesis: true equivalence limit d0 is less than 0.4444444
95 percent confidence interval:
 0.0000000 0.6704794
sample estimates:
Sorensen dissimilarity 
             0.6180371 
attr(,"se")
standard error 
    0.03188266 


$waldman$miscellaneous

	Normal asymptotic test for 2x2 contingency tables based on the
	Sorensen-Dice dissimilarity

data:  tab
(d - d0) / se = -10.523, p-value < 2.2e-16
alternative hypothesis: true equivalence limit d0 is less than 0.4444444
95 percent confidence interval:
 0.0000000 0.2618917
sample estimates:
Sorensen dissimilarity 
             0.2280702 
attr(,"se")
standard error 
    0.02056206 


$waldman$sanger

	Normal asymptotic test for 2x2 contingency tables based on the
	Sorensen-Dice dissimilarity

data:  tab
(d - d0) / se = -4.9774, p-value = 3.222e-07
alternative hypothesis: true equivalence limit d0 is less than 0.4444444
95 percent confidence interval:
 0.0000000 0.3658088
sample estimates:
Sorensen dissimilarity 
             0.3269962 
attr(,"se")
standard error 
    0.02359637 


$waldman$Vogelstein

	Normal asymptotic test for 2x2 contingency tables based on the
	Sorensen-Dice dissimilarity

data:  tab
(d - d0) / se = -6.6979, p-value = 1.057e-11
alternative hypothesis: true equivalence limit d0 is less than 0.4444444
95 percent confidence interval:
 0.0000000 0.3321681
sample estimates:
Sorensen dissimilarity 
             0.2956204 
attr(,"se")
standard error 
    0.02221937 



attr(,"class")
[1] "equivSDhtestList" "list"            
> class(allTests)
[1] "equivSDhtestList" "list"            
> set.seed(123)
> allBootTests <- equivTestSorensen(cont_all_BP4, boot = TRUE)
> allBootTests
$cangenes
$cangenes$atlas

	No test performed due not finite (d - d0) / se statistic

data:  tab
(d - d0) / se = Inf, p-value = NA
alternative hypothesis: true equivalence limit d0 is less than 0.4444444
95 percent confidence interval:
   0 NaN
sample estimates:
Sorensen dissimilarity 
                     1 
attr(,"se")
standard error 
             0 



$cis
$cis$atlas

	Bootstrap test for 2x2 contingency tables based on the Sorensen-Dice
	dissimilarity (10000 bootstrap replicates)

data:  tab
(d - d0) / se = 9.3376, p-value = 1
alternative hypothesis: true equivalence limit d0 is less than 0.4444444
95 percent confidence interval:
 0.0000000 0.7400086
sample estimates:
Sorensen dissimilarity 
             0.6963563 
attr(,"se")
standard error 
    0.02697813 


$cis$cangenes

	No test performed due not finite (d - d0) / se statistic

data:  tab
(d - d0) / se = Inf, p-value = NA
alternative hypothesis: true equivalence limit d0 is less than 0.4444444
95 percent confidence interval:
   0 NaN
sample estimates:
Sorensen dissimilarity 
                     1 
attr(,"se")
standard error 
             0 



$miscellaneous
$miscellaneous$atlas

	Bootstrap test for 2x2 contingency tables based on the Sorensen-Dice
	dissimilarity (10000 bootstrap replicates)

data:  tab
(d - d0) / se = -2.208, p-value = 0.0164
alternative hypothesis: true equivalence limit d0 is less than 0.4444444
95 percent confidence interval:
 0.000000 0.431994
sample estimates:
Sorensen dissimilarity 
             0.3936508 
attr(,"se")
standard error 
    0.02300482 


$miscellaneous$cangenes

	No test performed due not finite (d - d0) / se statistic

data:  tab
(d - d0) / se = Inf, p-value = NA
alternative hypothesis: true equivalence limit d0 is less than 0.4444444
95 percent confidence interval:
   0 NaN
sample estimates:
Sorensen dissimilarity 
                     1 
attr(,"se")
standard error 
             0 


$miscellaneous$cis

	Bootstrap test for 2x2 contingency tables based on the Sorensen-Dice
	dissimilarity (10000 bootstrap replicates)

data:  tab
(d - d0) / se = 2.9448, p-value = 0.9974
alternative hypothesis: true equivalence limit d0 is less than 0.4444444
95 percent confidence interval:
 0.0000000 0.6097785
sample estimates:
Sorensen dissimilarity 
             0.5503356 
attr(,"se")
standard error 
    0.03595877 



$sanger
$sanger$atlas

	Bootstrap test for 2x2 contingency tables based on the Sorensen-Dice
	dissimilarity (10000 bootstrap replicates)

data:  tab
(d - d0) / se = -3.1077, p-value = 0.0017
alternative hypothesis: true equivalence limit d0 is less than 0.4444444
95 percent confidence interval:
 0.000000 0.412172
sample estimates:
Sorensen dissimilarity 
             0.3748056 
attr(,"se")
standard error 
    0.02240875 


$sanger$cangenes

	No test performed due not finite (d - d0) / se statistic

data:  tab
(d - d0) / se = Inf, p-value = NA
alternative hypothesis: true equivalence limit d0 is less than 0.4444444
95 percent confidence interval:
   0 NaN
sample estimates:
Sorensen dissimilarity 
                     1 
attr(,"se")
standard error 
             0 


$sanger$cis

	Bootstrap test for 2x2 contingency tables based on the Sorensen-Dice
	dissimilarity (10000 bootstrap replicates)

data:  tab
(d - d0) / se = 4.0855, p-value = 0.9999
alternative hypothesis: true equivalence limit d0 is less than 0.4444444
95 percent confidence interval:
 0.0000000 0.6467971
sample estimates:
Sorensen dissimilarity 
             0.5884244 
attr(,"se")
standard error 
    0.03524148 


$sanger$miscellaneous

	Bootstrap test for 2x2 contingency tables based on the Sorensen-Dice
	dissimilarity (10000 bootstrap replicates)

data:  tab
(d - d0) / se = -5.5254, p-value = 9.999e-05
alternative hypothesis: true equivalence limit d0 is less than 0.4444444
95 percent confidence interval:
 0.0000000 0.3498904
sample estimates:
Sorensen dissimilarity 
             0.3064877 
attr(,"se")
standard error 
    0.02496764 



$Vogelstein
$Vogelstein$atlas

	Bootstrap test for 2x2 contingency tables based on the Sorensen-Dice
	dissimilarity (10000 bootstrap replicates)

data:  tab
(d - d0) / se = -4.5244, p-value = 2e-04
alternative hypothesis: true equivalence limit d0 is less than 0.4444444
95 percent confidence interval:
 0.0000000 0.3828507
sample estimates:
Sorensen dissimilarity 
             0.3473684 
attr(,"se")
standard error 
     0.0214559 


$Vogelstein$cangenes

	No test performed due not finite (d - d0) / se statistic

data:  tab
(d - d0) / se = Inf, p-value = NA
alternative hypothesis: true equivalence limit d0 is less than 0.4444444
95 percent confidence interval:
   0 NaN
sample estimates:
Sorensen dissimilarity 
                     1 
attr(,"se")
standard error 
             0 


$Vogelstein$cis

	Bootstrap test for 2x2 contingency tables based on the Sorensen-Dice
	dissimilarity (10000 bootstrap replicates)

data:  tab
(d - d0) / se = 5.2254, p-value = 1
alternative hypothesis: true equivalence limit d0 is less than 0.4444444
95 percent confidence interval:
 0.0000000 0.6775108
sample estimates:
Sorensen dissimilarity 
             0.6216216 
attr(,"se")
standard error 
    0.03390663 


$Vogelstein$miscellaneous

	Bootstrap test for 2x2 contingency tables based on the Sorensen-Dice
	dissimilarity (10000 bootstrap replicates)

data:  tab
(d - d0) / se = -4.1614, p-value = 9.999e-05
alternative hypothesis: true equivalence limit d0 is less than 0.4444444
95 percent confidence interval:
 0.0000000 0.3818835
sample estimates:
Sorensen dissimilarity 
             0.3390192 
attr(,"se")
standard error 
    0.02533414 


$Vogelstein$sanger

	Bootstrap test for 2x2 contingency tables based on the Sorensen-Dice
	dissimilarity (10000 bootstrap replicates)

data:  tab
(d - d0) / se = -21.248, p-value = 9.999e-05
alternative hypothesis: true equivalence limit d0 is less than 0.4444444
95 percent confidence interval:
 0.0000000 0.1438618
sample estimates:
Sorensen dissimilarity 
             0.1161826 
attr(,"se")
standard error 
    0.01544915 



$waldman
$waldman$atlas

	Bootstrap test for 2x2 contingency tables based on the Sorensen-Dice
	dissimilarity (10000 bootstrap replicates)

data:  tab
(d - d0) / se = -8.5662, p-value = 9.999e-05
alternative hypothesis: true equivalence limit d0 is less than 0.4444444
95 percent confidence interval:
 0.000000 0.313061
sample estimates:
Sorensen dissimilarity 
              0.280677 
attr(,"se")
standard error 
    0.01911793 


$waldman$cangenes

	No test performed due not finite (d - d0) / se statistic

data:  tab
(d - d0) / se = Inf, p-value = NA
alternative hypothesis: true equivalence limit d0 is less than 0.4444444
95 percent confidence interval:
   0 NaN
sample estimates:
Sorensen dissimilarity 
                     1 
attr(,"se")
standard error 
             0 


$waldman$cis

	Bootstrap test for 2x2 contingency tables based on the Sorensen-Dice
	dissimilarity (10000 bootstrap replicates)

data:  tab
(d - d0) / se = 5.4447, p-value = 1
alternative hypothesis: true equivalence limit d0 is less than 0.4444444
95 percent confidence interval:
 0.0000000 0.6710143
sample estimates:
Sorensen dissimilarity 
             0.6180371 
attr(,"se")
standard error 
    0.03188266 


$waldman$miscellaneous

	Bootstrap test for 2x2 contingency tables based on the Sorensen-Dice
	dissimilarity (10000 bootstrap replicates)

data:  tab
(d - d0) / se = -10.523, p-value = 9.999e-05
alternative hypothesis: true equivalence limit d0 is less than 0.4444444
95 percent confidence interval:
 0.0000000 0.2638861
sample estimates:
Sorensen dissimilarity 
             0.2280702 
attr(,"se")
standard error 
    0.02056206 


$waldman$sanger

	Bootstrap test for 2x2 contingency tables based on the Sorensen-Dice
	dissimilarity (10000 bootstrap replicates)

data:  tab
(d - d0) / se = -4.9774, p-value = 9.999e-05
alternative hypothesis: true equivalence limit d0 is less than 0.4444444
95 percent confidence interval:
 0.0000000 0.3668027
sample estimates:
Sorensen dissimilarity 
             0.3269962 
attr(,"se")
standard error 
    0.02359637 


$waldman$Vogelstein

	Bootstrap test for 2x2 contingency tables based on the Sorensen-Dice
	dissimilarity (10000 bootstrap replicates)

data:  tab
(d - d0) / se = -6.6979, p-value = 9.999e-05
alternative hypothesis: true equivalence limit d0 is less than 0.4444444
95 percent confidence interval:
 0.000000 0.334067
sample estimates:
Sorensen dissimilarity 
             0.2956204 
attr(,"se")
standard error 
    0.02221937 



attr(,"class")
[1] "equivSDhtestList" "list"            
> class(allBootTests)
[1] "equivSDhtestList" "list"            
> getPvalue(allBootTests, simplify = FALSE)
                   atlas cangenes       cis miscellaneous     sanger Vogelstein
atlas         0.00000000      NaN 1.0000000    0.01639836 0.00169983 0.00019998
cangenes             NaN        0       NaN           NaN        NaN        NaN
cis           1.00000000      NaN 0.0000000    0.99740026 0.99990001 1.00000000
miscellaneous 0.01639836      NaN 0.9974003    0.00000000 0.00009999 0.00009999
sanger        0.00169983      NaN 0.9999000    0.00009999 0.00000000 0.00009999
Vogelstein    0.00019998      NaN 1.0000000    0.00009999 0.00009999 0.00000000
waldman       0.00009999      NaN 1.0000000    0.00009999 0.00009999 0.00009999
                waldman
atlas         9.999e-05
cangenes            NaN
cis           1.000e+00
miscellaneous 9.999e-05
sanger        9.999e-05
Vogelstein    9.999e-05
waldman       0.000e+00
> getEffNboot(allBootTests)
          cangenes.atlas                cis.atlas             cis.cangenes 
                     NaN                    10000                      NaN 
     miscellaneous.atlas   miscellaneous.cangenes        miscellaneous.cis 
                   10000                      NaN                    10000 
            sanger.atlas          sanger.cangenes               sanger.cis 
                   10000                      NaN                    10000 
    sanger.miscellaneous         Vogelstein.atlas      Vogelstein.cangenes 
                   10000                    10000                      NaN 
          Vogelstein.cis Vogelstein.miscellaneous        Vogelstein.sanger 
                   10000                    10000                    10000 
           waldman.atlas         waldman.cangenes              waldman.cis 
                   10000                      NaN                    10000 
   waldman.miscellaneous           waldman.sanger       waldman.Vogelstein 
                   10000                    10000                    10000 
> 
> # To adjust for testing multiplicity:
> p.adjust(getPvalue(allBootTests), method = "holm")
          cangenes.atlas.p-value                cis.atlas.p-value 
                             NaN                       1.00000000 
            cis.cangenes.p-value      miscellaneous.atlas.p-value 
                             NaN                       0.09839016 
  miscellaneous.cangenes.p-value        miscellaneous.cis.p-value 
                             NaN                       1.00000000 
            sanger.atlas.p-value          sanger.cangenes.p-value 
                      0.01189881                              NaN 
              sanger.cis.p-value     sanger.miscellaneous.p-value 
                      1.00000000                       0.00149985 
        Vogelstein.atlas.p-value      Vogelstein.cangenes.p-value 
                      0.00159984                              NaN 
          Vogelstein.cis.p-value Vogelstein.miscellaneous.p-value 
                      1.00000000                       0.00149985 
       Vogelstein.sanger.p-value            waldman.atlas.p-value 
                      0.00149985                       0.00149985 
        waldman.cangenes.p-value              waldman.cis.p-value 
                             NaN                       1.00000000 
   waldman.miscellaneous.p-value           waldman.sanger.p-value 
                      0.00149985                       0.00149985 
      waldman.Vogelstein.p-value 
                      0.00149985 
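A quick sanity check on the Holm adjustment above: `p.adjust` drops the six NaN comparisons involving cangenes before ranking, so 15 finite p-values remain, and the smallest raw p-value is multiplied by 15: 15 × 9.999e-05 = 0.00149985, as reported. A minimal sketch of Holm's step-down procedure (in Python for illustration; the report itself uses R's `p.adjust`):

```python
# Minimal sketch of Holm's step-down adjustment, mirroring what
# R's p.adjust(..., method = "holm") does once the NaN entries
# have been dropped. Illustrative only.
def holm(pvalues):
    """Holm-adjusted p-values, returned in the original order."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, i in enumerate(order):
        # rank-k p-value is scaled by (m - k); cummax enforces monotonicity
        running_max = max(running_max, (m - rank) * pvalues[i])
        adjusted[i] = min(1.0, running_max)
    return adjusted

# With m = 15 finite comparisons, the smallest raw p-value 9.999e-05
# becomes 15 * 9.999e-05 = 0.00149985, matching the output above.
print(holm([9.999e-05] + [0.5] * 14)[0])
```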
> 
> # If only partial statistics are desired:
> dSorensen(cont_all_BP4)
                  atlas cangenes       cis miscellaneous    sanger Vogelstein
atlas         0.0000000        1 0.6963563     0.3936508 0.3748056  0.3473684
cangenes      1.0000000        0 1.0000000     1.0000000 1.0000000  1.0000000
cis           0.6963563        1 0.0000000     0.5503356 0.5884244  0.6216216
miscellaneous 0.3936508        1 0.5503356     0.0000000 0.3064877  0.3390192
sanger        0.3748056        1 0.5884244     0.3064877 0.0000000  0.1161826
Vogelstein    0.3473684        1 0.6216216     0.3390192 0.1161826  0.0000000
waldman       0.2806770        1 0.6180371     0.2280702 0.3269962  0.2956204
                waldman
atlas         0.2806770
cangenes      1.0000000
cis           0.6180371
miscellaneous 0.2280702
sanger        0.3269962
Vogelstein    0.2956204
waldman       0.0000000
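The entries of this matrix are Sorensen-Dice dissimilarities computed from pairwise 2x2 enrichment contingency tables: d = (n10 + n01) / (2 n11 + n10 + n01), where n11 counts GO terms enriched in both gene lists and n10, n01 those enriched in only one of them. This also explains the cangenes row: with no jointly enriched terms, d = 1 and the standard error degenerates to 0, so no test can be performed. A minimal sketch with hypothetical counts (in Python for illustration; goSorensen's `dSorensen` builds the tables itself):

```python
# Sorensen-Dice dissimilarity from a 2x2 enrichment contingency table.
# Counts below are hypothetical, for illustration only:
#   n11       = GO terms enriched in both gene lists
#   n10, n01  = terms enriched in only the first / only the second list
def sorensen_dissimilarity(n11, n10, n01):
    return (n10 + n01) / (2 * n11 + n10 + n01)

# d = 0 for identical enrichment profiles, d = 1 for disjoint ones
print(sorensen_dissimilarity(50, 10, 10))
print(sorensen_dissimilarity(0, 5, 5))
```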
> duppSorensen(cont_all_BP4)
                  atlas cangenes       cis miscellaneous    sanger Vogelstein
atlas         0.0000000      NaN 0.7407313     0.4314904 0.4116647  0.3826602
cangenes            NaN        0       NaN           NaN       NaN        NaN
cis           0.7407313      NaN 0.0000000     0.6094825 0.6463915  0.6773931
miscellaneous 0.4314904      NaN 0.6094825     0.0000000 0.3475558  0.3806901
sanger        0.4116647      NaN 0.6463915     0.3475558 0.0000000  0.1415942
Vogelstein    0.3826602      NaN 0.6773931     0.3806901 0.1415942  0.0000000
waldman       0.3121232      NaN 0.6704794     0.2618917 0.3658088  0.3321681
                waldman
atlas         0.3121232
cangenes            NaN
cis           0.6704794
miscellaneous 0.2618917
sanger        0.3658088
Vogelstein    0.3321681
waldman       0.0000000
> seSorensen(cont_all_BP4)
                   atlas cangenes        cis miscellaneous     sanger
atlas         0.00000000        0 0.02697813    0.02300482 0.02240875
cangenes      0.00000000        0 0.00000000    0.00000000 0.00000000
cis           0.02697813        0 0.00000000    0.03595877 0.03524148
miscellaneous 0.02300482        0 0.03595877    0.00000000 0.02496764
sanger        0.02240875        0 0.03524148    0.02496764 0.00000000
Vogelstein    0.02145590        0 0.03390663    0.02533414 0.01544915
waldman       0.01911793        0 0.03188266    0.02056206 0.02359637
              Vogelstein    waldman
atlas         0.02145590 0.01911793
cangenes      0.00000000 0.00000000
cis           0.03390663 0.03188266
miscellaneous 0.02533414 0.02056206
sanger        0.01544915 0.02359637
Vogelstein    0.00000000 0.02221937
waldman       0.02221937 0.00000000
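The three matrices fit together: the `duppSorensen` values are consistent with a one-sided 95% normal upper limit, dUpp = d + z(0.95) × se. For the waldman-atlas pair, 0.2806770 + 1.6449 × 0.01911793 ≈ 0.3121232, matching the value above. A quick numerical check (in Python for illustration; `NormalDist().inv_cdf(0.95)` plays the role of R's `qnorm(0.95)`):

```python
from statistics import NormalDist

# Illustrative check, not goSorensen itself: the reported upper limits
# look like one-sided 95% normal bounds, dUpp = d + z * se.
z = NormalDist().inv_cdf(0.95)   # ~1.6448536, R's qnorm(0.95)

d = 0.2806770     # Sorensen dissimilarity, waldman vs atlas (dSorensen above)
se = 0.01911793   # its standard error (seSorensen above)

dupp = d + z * se
print(round(dupp, 7))   # agrees with the 0.3121232 reported by duppSorensen
```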
> 
> 
> # Typically, in a real study it would be interesting to scan tests
> # along several ontologies and levels inside these ontologies
> # (which will obviously be quite a slow process):
> # gc()
> # set.seed(123)
> # allBootTests_BP_MF_lev4to8 <- allEquivTestSorensen(allOncoGeneLists,
> #                                                    boot = TRUE,
> #                                                    geneUniverse = humanEntrezIDs, orgPackg = "org.Hs.eg.db",
> #                                                    ontos = c("BP", "MF"), GOLevels = 4:8)
> # getPvalue(allBootTests_BP_MF_lev4to8)
> # getEffNboot(allBootTests_BP_MF_lev4to8)
> 
> proc.time()
   user  system elapsed 
151.800   2.983 154.792 

Example timings

goSorensen.Rcheck/goSorensen-Ex.timings

name                      user  system  elapsed
allBuildEnrichTable      0.000   0.000    0.001
allEquivTestSorensen     0.239   0.025    0.264
allHclustThreshold       0.058   0.003    0.060
allSorenThreshold        0.054   0.001    0.055
buildEnrichTable        56.892   1.485   58.382
dSorensen                0.075   0.015    0.095
duppSorensen             0.113   0.010    0.123
enrichedIn              47.274   0.622   47.897
equivTestSorensen        0.315   0.010    0.326
getDissimilarity         0.204   0.043    0.247
getEffNboot              1.086   0.015    1.101
getNboot                 1.093   0.027    1.120
getPvalue                0.203   0.047    0.250
getSE                    0.233   0.047    0.278
getTable                 0.218   0.084    0.301
getUpper                 0.193   0.051    0.245
hclustThreshold          0.216   0.007    0.224
nice2x2Table             0.002   0.000    0.003
seSorensen               0.001   0.001    0.003
sorenThreshold           0.213   0.003    0.215
upgrade                  0.545   0.145    0.689