GCC2018 + BOSC 2018: The Bioinformatics Community Conference
June 25-30, Portland, Oregon, United States

The joint 2018 Galaxy Community and Bioinformatics Open Source Conferences start with training, and the training topics that will be offered are determined by you. Please take a few minutes to consider which training topics you would like to see offered at GCCBOSC2018.

GCCBOSC2018 will be held 25-30 June in Portland, Oregon, United States. It will feature two days of training, the second of which is multi-track and includes content for both the BOSC and Galaxy communities.

Nominated Topics

Topics were nominated by the community, and refined by potential instructors. Topics are categorized below by the potential audience and the interface(s) to be used in the workshop.

Code Explanation
Audience:
  • BB: Beginning Bioinformatics Analyst
  • XB: Experienced Bioinformatics Analyst
  • IP: Infrastructure Provider
  • TD: Tool Developer

Interface:
  • GG: Galaxy or other Graphical User Interface
  • CL: Command line, scripting and/or basic programming

Q: What makes a bioinformatic analyst experienced?
A: Attending at least one beginning bioinformatics session.

The topics:

BB XB IP TD GG CL Topic
BB XB - - - - Setting up for success: Everything you need to know when planning for an RNA-seq analysis
BB XB - - GG - Galaxy 101 - A gentle introduction to Galaxy
BB XB - - GG - RNA-Seq Analysis in Galaxy
BB XB - - GG - Galaxy for Power Users
BB XB - - GG - How to analyze microbiota data with Galaxy
BB XB - - GG - ChIPseq analysis using deepTools and MACS2
BB XB - - GG - Hi-C analysis in Galaxy
BB XB - - GG - Galaxy for Proteogenomics!
BB XB - - GG - Small genome de novo assembly using Galaxy
BB XB - - GG CL GATK4
BB XB - - GG CL Data Carpentry Genomics Workshop: Data Organization and Automation with Shell
- XB IP - GG - Bioinformatics Training and Education with the Galaxy Training Network
- XB - - GG CL Handling integrated biological data using Python (or R) and InterMine
- XB IP - - CL Practical use of the Galaxy API command line tools
- XB IP - - CL Building a Community Genome Database with Tripal v3
- XB IP TD - CL Workflow Description Language
- XB IP TD - CL Community built analyses that run everywhere with bcbio
- XB IP TD - CL Introduction to Common Workflow Language
- XB IP TD GG CL Deploying (Galaxy and your) applications into clouds
- - IP TD - CL Conda and Containers
- - IP TD - CL Writing & Publishing Galaxy Tools
- - IP - - CL Setting up a Galaxy instance as a service
- - IP - - CL Advanced customization of a Galaxy instance
- - IP - - CL Advanced accelerated Galaxy admin
- - IP - - CL Administration of Galaxy Infrastructures with Puppet
- - IP - - CL Galaxy Interactive Environments
- - IP - - CL Adding Galaxy Workflows to a Tripal Website
- - IP - - CL The Galaxy Docker Project
- - IP - - CL The Galaxy Database Schema
- - IP - - - Galaxy Architecture

Setting up for success: Everything you need to know when planning for an RNA-seq analysis

This workshop is geared towards researchers who are thinking about conducting an RNA-seq experiment and are interested in knowing more about what is involved. The planning process requires taking a step back to evaluate various factors and ultimately assess the feasibility of the experiment, anticipating and avoiding potential pitfalls. The workshop will go into detail about the different strategies for working with RNA-seq data, depending on the biological question at hand:

  • Best practice guidelines for experimental design (Biological replicates, Paired-end vs Single-end, Sequencing depth).
  • Data storage and computational requirements.
  • Overview of commonly used workflows for differential gene expression, de-novo assembly, isoform quantification and other uses of RNA sequencing.

The focus of this workshop is to outline current standards and required resources for the analysis of RNA sequencing data. This workshop will not provide an exhaustive list of software tools or pipelines; rather, it aims to foster a fruitful discussion on how best to prepare for RNA-seq data analysis, from the lab to manuscript preparation.

Prerequisites:

  • None

Galaxy 101 - A gentle introduction to Galaxy

This workshop will focus on introducing the Galaxy user interface and how it can be used to analyze large datasets. We will cover the basic features of Galaxy, including where to find tools, how to import and use your data, and an introduction to workflows. This session is recommended for anyone who has not used, or only rarely uses Galaxy.

Prerequisites:

  • Little or no experience using Galaxy.
  • A wi-fi enabled laptop with a modern web browser. Google Chrome, Firefox and Safari will work best.

RNA-Seq Analysis in Galaxy

This workshop will introduce the concepts behind transcriptomics with NGS data and how to analyze this data in Galaxy. Specifically, this workshop will focus on de novo transcriptome reconstruction of RNA-seq data with the following goals:

  • comprehensive identification of all transcripts across an experiment
  • appropriately annotating classes of transcripts
  • generating abundance estimates across a transcriptome
  • significance testing of differentially expressed transcripts
  • visualisation of reads and transcript structures

Prerequisites:

  • a general knowledge of Galaxy (for example, you should be familiar with the material in Galaxy 101 or have attended Introduction to Galaxy).
  • a wi-fi enabled laptop with a modern web browser. Google Chrome, Firefox and Safari will work best.

Galaxy for Power Users

Learn new tricks to optimize your research.

  • scratch-book
  • (propagating) tags
  • history searching and filtering
  • collections and lists
  • post-job triggers
  • data libraries
  • (hierarchical) upload to collections
  • other quality-of-life (QoL) tricks

Prerequisites:

  • Attendance of Galaxy 101, at a minimum.
  • A wi-fi enabled laptop with a modern web browser. Google Chrome, Firefox and Safari will work best.

How to analyze microbiota data with Galaxy

The study of complex microorganism communities has been made easier by the development of sequencing platforms and dedicated, powerful bioinformatics tools. Several tools have recently been integrated into Galaxy for microbiota data analysis: Mothur, QIIME, MetaPhlAn, HUMAnN, FROGS, MEGAHIT, MetaSPAdes, and more.

In this training, we will show how to analyze metagenomic and amplicon data inside Galaxy:

  • Extraction of the OTUs using QIIME/Mothur
  • Reconstruction of the taxonomic composition of a sample without OTUs using MetaPhlAn
  • Identification of the metabolic functions carried out in an environment using HUMAnN

Prerequisites

  • Galaxy 101 or equivalent experience
  • Ideally participants will already be familiar with the concepts behind metagenomics (e.g., OTU)
  • A wi-fi enabled laptop with a modern web browser. Google Chrome, Firefox and Safari will work best

ChIPseq analysis using deepTools and MACS2

Did my IP work? Where is my signal? How well do my replicates correlate? What might my peaks even look like? Where are my peaks (or signal) in relation to transcription start sites (or other features)? These are common questions that biologists first pose when dealing with ChIPseq data. We will use deepTools and MACS2 within Galaxy to demonstrate effective methods of

  1. performing ChIPseq-specific quality control,
  2. calling peaks and
  3. visualising signal and peak enrichment around genes or other features.
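
The workshop itself works through these steps in the Galaxy interface, but for orientation, here is a minimal command-line sketch of the same three steps using deepTools and MACS2 (the file names chip.bam, input.bam and genes.bed are hypothetical placeholders):

    import subprocess

    # Hypothetical inputs: an IP sample, its input control, and gene annotations.
    chip, control, genes = "chip.bam", "input.bam", "genes.bed"

    # 1. ChIPseq-specific QC: did the IP enrich signal relative to the input?
    subprocess.run(["plotFingerprint", "-b", chip, control,
                    "--plotFile", "fingerprint.png"], check=True)

    # 2. Peak calling with MACS2 (human effective genome size as an example).
    subprocess.run(["macs2", "callpeak", "-t", chip, "-c", control,
                    "-f", "BAM", "-g", "hs", "-n", "sample",
                    "--outdir", "macs2_output"], check=True)

    # 3. Visualise signal around genes: coverage track, matrix, heatmap.
    subprocess.run(["bamCoverage", "-b", chip, "-o", "chip.bw"], check=True)
    subprocess.run(["computeMatrix", "reference-point", "-S", "chip.bw",
                    "-R", genes, "-a", "2000", "-b", "2000",
                    "--outFileName", "matrix.gz"], check=True)
    subprocess.run(["plotHeatmap", "-m", "matrix.gz",
                    "--outFileName", "tss_heatmap.png"], check=True)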

Prerequisites

  • Galaxy 101 or equivalent experience.
  • Ideally participants will already be familiar with generic NGS quality control and read mapping, since those won't be covered
  • A wi-fi enabled laptop with a modern web browser. Google Chrome, Firefox and Safari will work best.

Hi-C analysis in Galaxy

This session will introduce the basics of chromosome conformation capture assays and their applications, followed by best practices in mapping, QC, visualising, and assigning topologically associating domains (TADs) with Hi-C data.

Prerequisites

  • Understanding of chromosome conformation capture (3C) and its variants (Hi-C, 5C, 4C), and of Illumina-based NGS.
  • Understanding of Galaxy user interface.

Galaxy for Proteogenomics!

Large-scale ‘omics’ data generation is driven by high throughput genome and transcriptome sequencing, and proteome characterization using mass spectrometry. As a result, many researchers are turning to integrative analysis of these ‘multi-omics’ datasets, given their great potential to provide novel biological insights. These multi-omics applications are particularly challenging for data analysis, as they require multiple, domain-specific software programs running on scalable infrastructure capable of handling the computing and storage needs of such large-scale data.

Galaxy has been shown to solve the problems of software integration and scalability for multi-omics analysis, and it additionally offers benefits for sharing complete workflows in a user-friendly environment accessible to wet-bench scientists. In this workshop, we will introduce the use of the Galaxy platform for multi-omics data analysis applications. As a representative example, we will focus on the maturing field of proteogenomic applications. Proteogenomics most commonly integrates RNA-Seq data, for generating customized protein sequence databases, with mass spectrometry-based proteomics data, which are matched to these databases to identify novel protein sequence variants. (Cancer Res. (2017); 77(21):e43-e46. doi: 10.1158/0008-5472.CAN-17-0331.)

Workshop Content:

The course will include a basic introduction to proteomics and a hands-on session covering the use of analytical workflows to generate proteomic databases from RNA-Seq data. The use and understanding of the modules that make up these proteogenomics analysis workflows will also be discussed.

Workshop Schedule:

  • Introduction to multi-omic studies
  • RNA-Seq Data Processing: Generating protein sequence search databases using the Galaxy platform
  • Hands-on session for proteomics data analysis using Galaxy
  • Identification of novel proteoforms and visualization

Workshop Goals:

  • Introduce the Galaxy framework as a solution for data analysis across ‘omics’ domains
  • Demonstrate use of Galaxy for a proteogenomic analysis (RNA-seq and proteomic integrative analysis)
  • Lay the foundation for attendees to implement Galaxy at their own facility or institution to meet ‘omics’ data analysis and informatics needs

Prerequisites:

  • Users will need a laptop for the hands-on training

Small genome de novo assembly using Galaxy

This workshop will cover the basics of de novo genome assembly using a small genome example. This includes project planning steps, selecting fragment sizes, initial assembly of reads into fully covered contigs, and then assembling those contigs into larger scaffolds that may include gaps. The end result will be a set of contigs and scaffolds with sufficient average length to perform further analysis on, including genome annotation. This workshop will use tools and methods targeted at small genomes. The basics of assembly and scaffolding presented here will be useful for building larger genomes, but the specific tools and much of the project planning will be different.

Prerequisites

  • Galaxy 101
  • A wifi enabled laptop with a web browser

GATK4

  1. Intro to GATK
    • reminder of what GATK is about; purpose, variant calling for newbs, major file formats and so on
  2. What's new in GATK4 specifically
    • new syntax/invocations, new engine capabilities, new scope of analysis, key improvements, tips & tricks.
  3. Options for running GATK -- hands-on
    • run "straight up" on laptop (with Docker)
    • run Spark tools on Google Dataproc
    • run pipeline on Cromwell on laptop
    • run pipeline on Google via Cromwell+Pipelines API (some scripty goodness)
    • run pipeline on FireCloud in GUI (briefly)
    • run pipeline on FireCloud via API + python bindings (lots of scripty goodness)
    • run pipeline on Galaxy
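
As a small preview of the "straight up on laptop (with Docker)" option, the sketch below invokes a GATK4 tool through the broadinstitute/gatk Docker image; the data directory and file names are hypothetical placeholders:

    import subprocess

    # Hypothetical directory containing an indexed ref.fasta and sample.bam.
    data_dir = "/path/to/data"

    # Run HaplotypeCaller via the GATK4 Docker image, mounting the data
    # directory so input and output files are visible inside the container.
    subprocess.run([
        "docker", "run", "--rm", "-v", f"{data_dir}:/data",
        "broadinstitute/gatk",
        "gatk", "HaplotypeCaller",
        "-R", "/data/ref.fasta",
        "-I", "/data/sample.bam",
        "-O", "/data/sample.g.vcf.gz",
        "-ERC", "GVCF",  # emit a GVCF for later joint genotyping
    ], check=True)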

Data Carpentry Genomics Workshop: Data Organization and Automation with Shell

The goal of this one-day tutorial is to teach participants the fundamental data management and analysis skills needed to conduct genomics research, including best practices for organizing bioinformatics projects and data, as well as the use of command-line utilities for managing large volumes of genomic data. This tutorial is derived from the Data Carpentry Genomics Workshop, focusing on the data organization and introduction to the command line lessons. The tutorial uses active learning and hands-on practice to teach participants the skills and perspectives needed to work effectively with genomic data. Data Carpentry’s aim is to teach researchers basic concepts, skills, and tools for working with data so that they can get more done in less time, and with less pain.

Lessons:

The tutorial assumes no prior experience with the tools covered. However, learners are expected to have some familiarity with biological concepts, including nucleotide abbreviations. Participants should bring their laptops and plan to participate actively.

Bioinformatics Training and Education with the Galaxy Training Network

Galaxy, with its flexibility, reproducibility, and scalability, is an ideal environment for teaching and training on diverse scientific topics.

The Galaxy Training Network is a community initiative dedicated to high-quality Galaxy-based training around the world. One of its objectives is to support trainers with complete training material and recommendations about bioinformatics training. Templates and best training practices have been defined to help trainers create new material, unify existing material, and make training materials more accessible and easier for users to learn from and for teachers to teach with (https://training.galaxyproject.org).

This workshop will introduce participants to the infrastructure of the GTN training materials and describe how to generate training materials following best practices. Participants will be introduced to the generation of Galaxy Interactive Tours and the creation of Docker Flavors dedicated to teaching and training sessions. The workshop will also cover best practices for running Galaxy-based workshops (how to plan a training session based on the number of attendees, time constraints, resource availability, etc.).

Prerequisites

  • Basic familiarity with using Galaxy (how to import datasets and run tools)
  • Basic familiarity with git and Docker will also be helpful for parts of the workshop.
  • A wi-fi enabled laptop with a modern web browser. Google Chrome or Firefox will work best.

Handling integrated biological data using Python (or R) and InterMine

This tutorial will guide you through loading and analyzing integrated biological data (generally genomic or proteomic) in InterMine via an API in Python or R. Topics covered will include automatically generating code to perform queries, customising the code to meet your needs, and automated analysis of sets, e.g. gene sets, including enrichment statistics. Skills gained can be re-used in any of the dozens of InterMines available, covering a broad range of organisms and dedicated purposes, from model organisms to plants, drug targets, and mitochondrial DNA.
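
To give a flavour of the generated code, here is a minimal sketch using the intermine Python client against FlyMine (the mine URL, views and constraint are illustrative only; any other InterMine instance works the same way):

    from intermine.webservice import Service

    # Connect to a public InterMine instance (FlyMine used here as an example).
    service = Service("https://www.flymine.org/flymine/service")

    # Build a query over Gene objects: pick output columns, add a constraint.
    query = service.new_query("Gene")
    query.add_view("primaryIdentifier", "symbol", "organism.name")
    query.add_constraint("organism.name", "=", "Drosophila melanogaster")

    # Print the first few result rows.
    for row in query.rows(size=10):
        print(row["primaryIdentifier"], row["symbol"])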

Prerequisites

  • Basic Python or R skills are advantageous but not required.
  • A wi-fi enabled laptop with Python or RStudio installed.

Practical use of the Galaxy API command line tools

How to use the Galaxy API to automate workflows.

Galaxy has an ever-growing API that allows external programs to upload and download data, manage histories and datasets, run tools and workflows, and even perform admin tasks. This session will cover various approaches to accessing the API, in particular using the BioBlend Python library.
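
A minimal BioBlend sketch of the kind of automation covered in the session (the server URL, API key and file name are placeholders):

    from bioblend.galaxy import GalaxyInstance

    # Connect to a Galaxy server with your personal API key (placeholders here).
    gi = GalaxyInstance(url="https://usegalaxy.org", key="YOUR_API_KEY")

    # Create a fresh history and upload a local file into it.
    history = gi.histories.create_history(name="BioBlend demo")
    gi.tools.upload_file("reads.fastq", history["id"])

    # List workflows available to this account; gi.workflows.invoke_workflow()
    # could then run one of them on datasets in the new history.
    for workflow in gi.workflows.get_workflows():
        print(workflow["id"], workflow["name"])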

Prerequisites

  • Unix command line
  • Basic understanding of Galaxy from a developer point of view.
  • Python programming.
  • A wi-fi enabled laptop with a modern web browser. Google Chrome, Firefox and Safari will work best.

Building a Community Genome Database with Tripal v3

Tripal v3 is the newest version of a popular open source software package for building community genomics websites. Using Docker containers, we will install a Tripal v3 site (including Apache, PostgreSQL, and Drupal) to create an empty genome database and website. We will review how to load the data types used by the core and extension Drupal modules, including organisms, analyses, genes and genomes, functional annotations, and controlled vocabularies. We will provide credits on the XSEDE cloud system Jetstream for this training, so students can follow along on their own laptops.

Prerequisites:

  • Linux command line.

Workflow Description Language

The advent of open, portable workflow languages is an exciting development that allows the definition of a workflow to be decoupled from its execution. One can create workflows that run unmodified on local compute, HPC clusters, or the cloud. As these languages are not tied to a specific execution environment, workflow descriptions can easily be shared, discovered and even composed together to form more complex workflows. The Workflow Description Language (WDL) is an open, community-driven standard that is designed from the ground up as a human-readable and -writable way to express portable tasks and workflows.

In this session we’ll walk through the lifecycle of writing, sharing and discovering portable workflows in WDL. We’ll introduce the WDL syntax. You’ll learn how to write and run a workflow locally. We’ll demonstrate how to use EPAM’s Pipeline Builder tool to visualize WDL workflows. We’ll look at Dockstore, an open platform where one can publish, share and discover workflows. Finally we’ll put everything together and see how we can compose workflows we find on Dockstore together with our own additions to create new, more powerful workflows.
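
To give a sense of the syntax before the session, here is a minimal sketch that writes a one-task WDL workflow and runs it locally with Cromwell (the cromwell.jar location is a placeholder; the workshop bundle will provide the exact setup):

    import json
    import subprocess

    # A minimal WDL workflow: one task that echoes a greeting.
    wdl = """
    task say_hello {
      String name
      command {
        echo "Hello, ${name}!"
      }
      output {
        String greeting = read_string(stdout())
      }
    }

    workflow hello {
      String name
      call say_hello { input: name = name }
    }
    """

    with open("hello.wdl", "w") as fh:
        fh.write(wdl)
    with open("inputs.json", "w") as fh:
        json.dump({"hello.name": "GCCBOSC"}, fh)

    # Run the workflow locally with Cromwell (path to the jar is a placeholder).
    subprocess.run(["java", "-jar", "cromwell.jar", "run", "hello.wdl",
                    "--inputs", "inputs.json"], check=True)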

Prerequisites:

  • Mac or Linux computer
  • Java 8 installed and runnable from the command line
  • A workshop bundle downloaded from TBD

Community built analyses that run everywhere with bcbio

bcbio provides community-built analyses for germline and somatic variant calling, RNA-seq, single cell, small RNA and ChIP-seq. This workshop will focus on using bcbio to run analysis pipelines in heterogeneous environments -- local machines, HPC, cloud providers and commercial services (and also, hopefully, Galaxy). Attendees will learn how to practically run their analyses on their platform of choice, while discussing how the community can contribute to building, sharing and maintaining workflows across multiple platforms. Recent presentations of bcbio show some of the topics we plan to cover.

Introduction to Common Workflow Language

Common Workflow Language (http://commonwl.org) is a standard for writing portable scientific workflows that can execute on a variety of compute environments and workflow systems.

  • What is CWL, history
  • Status of CWL implementations (Arvados, Toil, Rabix, Galaxy, Cromwell, cwltool, ...)
  • Wrapping bioinformatics tools in CWL
  • Connecting tools together into workflows
  • Best practices for writing portable workflows
  • Hands on "bring your own pipeline" session
  • Q&A on advanced topics based on audience
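
For orientation ahead of the hands-on session, here is a minimal sketch that wraps a trivial command (echo) as a CWL CommandLineTool and runs it with the cwltool reference implementation (assuming cwltool is installed):

    import subprocess
    import textwrap

    # A minimal CWL tool description: wrap `echo` with one string input.
    cwl_tool = textwrap.dedent("""\
        cwlVersion: v1.0
        class: CommandLineTool
        baseCommand: echo
        inputs:
          message:
            type: string
            inputBinding:
              position: 1
        outputs:
          out:
            type: stdout
        """)

    with open("echo.cwl", "w") as fh:
        fh.write(cwl_tool)

    # cwltool turns the tool's declared inputs into command-line options.
    subprocess.run(["cwltool", "echo.cwl", "--message", "Hello, CWL!"],
                   check=True)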

Prerequisites

  • Unix command line experience
  • Experience with Docker

Deploying (Galaxy and your) applications into clouds

This tutorial will have two parts. Part 1 will demonstrate how to use the all-new CloudLaunch service to launch and manage instances of Galaxy on the Cloud on multiple cloud providers. In part 2, we will cover the technical process of adding custom applications into CloudLaunch and making them available for launching on any supported cloud (AWS, Azure, GCE, OpenStack). This part will also cover use of the CloudLaunch API, enabling external applications to leverage CloudLaunch capabilities.
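
As an illustration of the kind of programmatic access part 2 covers, here is a hedged sketch using the requests library; note that the endpoint path and token scheme shown are assumptions for illustration only, not the documented CloudLaunch API (which the session will cover):

    import requests

    # Assumed values for illustration only; the real server URL, endpoint
    # paths and authentication scheme will be presented in the session.
    BASE_URL = "https://launch.usegalaxy.org/api/v1"  # assumed base URL
    TOKEN = "YOUR_CLOUDLAUNCH_API_TOKEN"              # placeholder credential

    # List applications registered with the server (hypothetical endpoint).
    resp = requests.get(f"{BASE_URL}/applications/",
                        headers={"Authorization": f"Token {TOKEN}"})
    resp.raise_for_status()
    for app in resp.json().get("results", []):
        print(app.get("name"))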

Prerequisites

  • Part 1: a laptop with WiFi and a modern browser
  • Part 2: Basic programming skills (Python; Angular 2/4/5 useful but not essential)

Conda and Containers

This workshop is aimed at people who want to develop dependencies for tools (either Galaxy or Common Workflow Language (CWL) tools).

We believe the best practice for declaring dependencies for either Galaxy or CWL is using Conda and Bioconda. Conda is a cross-platform package manager with minimal requirements, which makes it ideal for HPC. It can also build isolated environments, ideal for platforms like Galaxy or CWL implementations. The Bioconda project is a set of Conda recipes for bioinformatics. The BioContainers project automatically builds best-practice containers for all Bioconda recipes, so building a Bioconda recipe for a package allows the same binaries to be used by both Galaxy (inside or outside a container) and by any conformant CWL implementation.

We will go through the process of creating, testing, and publishing a Bioconda package and we will work through an example of connecting these packages to a real world tool. Participants will be able to work through the examples using either Galaxy or CWL tools.
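
As a small taste of the hands-on portion, the sketch below creates an isolated Conda environment containing a Bioconda package (samtools is just an example; the environment name is arbitrary):

    import subprocess

    # Create an isolated environment with a Bioconda package, using the
    # channel order recommended by the Bioconda project.
    subprocess.run(
        ["conda", "create", "--yes", "--name", "gccbosc-demo",
         "--channel", "conda-forge", "--channel", "bioconda",
         "samtools=1.9"],
        check=True,
    )

    # Activate it with `conda activate gccbosc-demo`. Because BioContainers
    # automatically builds a container for the same recipe, Galaxy and CWL
    # runners can use identical binaries inside or outside a container.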

Prerequisites

  • Some knowledge of tool development - either CWL or Galaxy.

Writing & Publishing Galaxy Tools

This session will walk developers and bioinformaticians through the process of taking a working script or application and turning it into a Galaxy tool. It will also cover the basics of using Planemo: a command-line utility to assist in building and publishing Galaxy tools. We will investigate wrapping, common parameters, tool linting, best practices, loading tools into Galaxy, citations, and publishing tools to GitHub and the Galaxy Tool Shed. Common tips and tricks will be discussed as well as insights from experienced tool developers.
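
For a sense of the Planemo-driven development cycle discussed above, here is a sketch of the main commands, wrapped in Python for consistency with the other examples on this page (the tool id and name are hypothetical placeholders):

    import subprocess

    def planemo(*args):
        """Run a planemo subcommand and fail loudly if it errors."""
        subprocess.run(["planemo", *args], check=True)

    # Generate a skeleton tool wrapper (id and name are placeholders).
    planemo("tool_init", "--id", "my_tool", "--name", "My example tool")

    # Check the wrapper against Galaxy's best-practice guidelines.
    planemo("lint", "my_tool.xml")

    # Run the tool's tests, then serve it in a throwaway Galaxy instance
    # for manual inspection in the browser.
    planemo("test", "my_tool.xml")
    planemo("serve", "my_tool.xml")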

Prerequisites:

  • A general knowledge of Galaxy (for example, you should be familiar with the material in Galaxy 101 or have attended Introduction to Galaxy).
  • Knowledge and comfort with the Unix/Linux command line interface and a text editor. If you don't know what commands like cd, mv, rm, mkdir, chmod, grep can do then you will struggle in this workshop.
  • A wi-fi enabled laptop with a modern web browser. Chrome or Firefox will work best.

Setting up a Galaxy instance as a service

In this workshop, you will learn what is important when you set up a Galaxy server from scratch, what pitfalls you might run into, how to interact with the potential users of the service you are going to offer, and how to make sure the Galaxy instance you have set up is actually used in the end. After a general introduction, several Galaxy installations will be presented. The session will include some demonstrations and hands-on exercises. We will finish with a panel discussion, where we intend to discuss questions from the workshop participants.

Prerequisites:

  • Familiarity with the bioinformatics problems (and their solutions) that wet-lab scientists run into.
  • Knowledge and comfort with the Unix/Linux command line interface and a text editor. If you don't know what cd, mv, rm, mkdir, chmod, grep and so on can do then you will struggle in this workshop.

Advanced customization of a Galaxy instance

Do you have your lab's Galaxy instance set up and configured but want to give it some more love without diving too deep into the code? This training will show you step by step how to modify some advanced but not complex parts of the installation. We will teach you how to:

  • modify the menu
  • prepare a custom tour
  • adjust the graphical interface
  • translate the UI labels into a different language
  • set up a built-in user/group chat
  • write and activate interface webhooks

Prerequisites

  • Introduction to Galaxy admin: Setting up a Galaxy instance as a service, or equivalent experience
  • Knowledge and comfort with the Unix/Linux command line interface and a text editor. If you don't know what cd, mv, rm, mkdir, chmod, grep and so on can do then you will struggle in this workshop.
  • A wi-fi enabled laptop with a modern web browser. Google Chrome, Firefox and Safari will work best.

Advanced accelerated Galaxy admin

A compressed, top-level review of the advanced parts of the week-long Galaxy Administrators Course, which has been delivered multiple times over the past two years. Given the scope of this topic, we will explain advanced concepts, point out resources, and provide guidance, tips, and tricks rather than going through the exercises and into details.

Prerequisites

  • Introduction to Galaxy admin: Setting up a Galaxy instance as a service, or equivalent experience
  • Knowledge and comfort with the Unix/Linux command line interface and a text editor. If you don't know what cd, mv, rm, mkdir, chmod, grep and so on can do then you will struggle in this workshop.

Administration of Galaxy Infrastructures with Puppet

Administering Galaxy infrastructures can be a daunting task. Configuration management allows one to implement infrastructure as code, as opposed to checklists and how-to guides, in order to achieve reliable installations and the ability to replicate an installation.

Building off the work started at the GCC2017 Hackathon, this training will introduce the idea of using Puppet for configuration management of systems, using existing Puppet code and other open source tools that augment Puppet. This session requires no prior experience with Puppet or other configuration management tools.

Prerequisites:

  • You have set up a Galaxy instance before… maybe.
  • Knowledge and comfort with the Unix/Linux command line interface and use of a text editor.
  • A CentOS 7 virtual machine set up with, at a minimum, the Infrastructure Server group installation type.
  • No Puppet, Chef, or Ansible experience is required.

Galaxy Interactive Environments

In this session you will get an in-depth introduction to Interactive Environments (IEs): an easy and powerful way to integrate arbitrary interactive web services into Galaxy. You will learn how to set up and secure IEs (Jupyter, RStudio, etc.) in a production Galaxy instance. We will demonstrate the IPython Galaxy Project and the general concept of IEs, and we will create an IE on the fly to get you started in creating your own Interactive Environments.

Prerequisites:

  • Basic understanding of Galaxy from a developer point of view.
  • General knowledge about Docker
  • Knowledge and comfort with the Unix/Linux command line interface and a text editor. If you don't know what cd, mv, rm, mkdir, chmod, grep and so on can do then you will struggle in this workshop.
  • A wi-fi enabled laptop with a modern web browser. Google Chrome, Firefox and Safari will work best.

Adding Galaxy Workflows to a Tripal Website

Learn to integrate two popular and widely adopted open source GMOD tools: Tripal, a content management system for building community genome websites, and Galaxy, a web-based workflow engine for biological data analysis. The new Tripal-to-Galaxy bridge allows users of community databases to select pre-designed workflows for common analyses and upload their own data or utilize site data as input. We will also use new R Markdown wrapped Galaxy tools (Aurora Galaxy Tools) to construct HTML reports summarizing and visualizing the workflow output.

Prerequisites

  • Familiarity with Galaxy user interface and basic Galaxy server administration.

The Galaxy Docker Project

In this session you will learn the internals of the Docker Galaxy Image. We will show you tips and tricks on how to run the Galaxy Docker Image successfully in production, how to manage updates and how to bind the container to a cluster scheduler. Moreover, you will learn how to create your own Galaxy flavour mixing a variety of different tools and visualisations.
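
As a starting point, a minimal sketch of launching the Galaxy Docker image locally, publishing the web interface on port 8080 and persisting data in a local directory (the directory name is a placeholder):

    import os
    import subprocess

    # Docker bind mounts need an absolute path; /export is the image's
    # designated directory for persistent data (databases, datasets, config).
    storage = os.path.abspath("galaxy_storage")

    subprocess.run([
        "docker", "run", "-d",
        "-p", "8080:80",             # Galaxy web interface on localhost:8080
        "-v", f"{storage}:/export",  # persist data between container restarts
        "bgruening/galaxy-stable",
    ], check=True)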

Prerequisites

  • Basic understanding of Galaxy from a developer point of view.
  • General knowledge about Docker
  • Familiarity with the Unix command line and text editors

The Galaxy Database Schema

Running a production Galaxy server, you sometimes end up in a situation where you need to interact with the database manually: for example, you want to extract usage information that cannot be gathered from the built-in reports tool, or, a more risky adventure, you need to change the state of a job to 'error'. For both cases, you need a good understanding of the Galaxy database schema. In this training session, you will learn some of the design concepts of the database and how to extract (or, if necessary, change) information useful for a Galaxy admin.
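
To give a flavour of the kind of query involved, here is a sketch using psycopg2 against a Galaxy PostgreSQL database (connection settings are placeholders; always try such queries on a copy or with a read-only account first):

    import psycopg2

    # Connection settings are placeholders for your own Galaxy database.
    conn = psycopg2.connect(dbname="galaxy", user="galaxy", host="localhost")

    with conn, conn.cursor() as cur:
        # Usage information the built-in reports tool may not expose,
        # e.g. how many jobs are currently in each state.
        cur.execute("SELECT state, count(*) FROM job "
                    "GROUP BY state ORDER BY count(*) DESC")
        for state, count in cur.fetchall():
            print(state, count)

    # The riskier intervention mentioned above would look like:
    #   UPDATE job SET state = 'error' WHERE id = <job_id>;
    # -- only do this once you understand the schema and have a backup.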

Prerequisites

  • Experience maintaining a production Galaxy server and a basic knowledge of relational databases and SQL statements

Galaxy Architecture

Want to know the big picture about what is going on inside Galaxy? This workshop will give participants a practical introduction to the Galaxy code base with a focus on changing those parts of Galaxy most often modified by local deployers and new contributors.

The workshop will include the following specific content:

  • A description of the various file and top-level directories in the Galaxy code base.
  • An overview of important Python modules - including models, tools, jobs, workflows, visualisations, and API controllers.
  • An overview of important Python objects and concepts in the Galaxy codebase - including the Galaxy transaction object ("trans"), the application object ("app"), and the configuration object ("config").
  • An overview of various plugin extension points.
  • An overview of important JavaScript modules that power the front-end.
  • An overview of important JavaScript concepts used by Galaxy - in particular Backbone MVC, Webpack, ES6, and Vue.
  • An overview of the client build system used to generate compressed JavaScript, cascading stylesheets, and other static web assets.
  • A demonstration of a complete start-to-finish modification of Galaxy - including forking the project on GitHub, modifying files, running the tests, checking style guidelines, committing the change, pushing it back to your local GitHub fork, and opening a pull request.
  • A brief description of other projects in the Galaxy ecosystem (CloudMan, the Tool Shed, Ephemeris, bioblend, docker-galaxy-stable, Pulsar, and Planemo).

Prerequisites:

  • Your interest.

Nominations in Progress

Genome-wide sgRNA screen analysis using Galaxy

Identify sgRNAs enriched in a screen under a specific treatment. Topics include raw data processing, mapping, and analysis with MAGeCK-VISPR.

Command line workflow management systems: Snakemake, Cluster Flow, Bpipe

How to use workflow management systems that are designed for the command line, such as Snakemake, Cluster Flow, and Bpipe.

Prerequisites

  • Linux command line experience

Analysis of scRNA sequencing data

How to analyze single-cell RNA sequencing data, e.g. with cellranger or with dedicated R packages.