Galaxy Test Instance
The Galaxy Test instance is available at https://test.galaxyproject.org/. Test is the beta site for the Galaxy Main instance; Main is the primary free, public Galaxy instance. Test is also free and public, and serves as a testbed where new tools and framework features are functionally and experimentally reviewed.
Test changes frequently, and we do not guarantee that things will work or that data/histories/workflows/visualizations will persist (even when saved in an account). Some new tools on Test will eventually be promoted to Main, but others will not. Do not expect backwards or forwards compatibility between data and tools on Test and those on Main, in a Distribution, or from the Tool Shed.
If you get an error on Test, there are many possible reasons for it. Try running the job again to make sure that some other change on Test didn't interfere (updates occur frequently). If you do find out what went wrong (for example, a specific bug in the tool wrapper or config), please let us know.
Information about Test
The Learn pages include information on how to use Test, Main, and most other Galaxy instances. Also see Choices for more on other ways to use and run Galaxy.
Job resubmission to Stampede
Certain tools will be automatically "resubmitted" to Stampede (see Job execution on Stampede for more about Stampede) if they initially run on Galaxy's local cluster but exceed the walltime (run time limit). The walltime differs per tool and is calculated based on previous average runtimes of that tool:
Tool | Walltime |
---|---|
BWA | 3 hours, 41 minutes |
BWA-MEM | 4 hours, 55 minutes |
Bowtie | 2 hours, 35 minutes |
Tophat | 6 hours, 11 minutes |
Cufflinks | 4 hours, 5 minutes |
Cuffdiff | 8 hours, 11 minutes |
Cuffmerge | 1 hour, 6 minutes |
**Legacy Tools** | |
Map with BWA for Illumina | 4 hours, 54 minutes |
Map with Bowtie for Illumina | 2 hours, 18 minutes |
Tophat (version 1) | 6 hours, 26 minutes |
When a job is resubmitted, you will see its state change from running (yellow) back to queued (gray), and a blue message box explaining that the job has been resubmitted will appear when the dataset is expanded.
Our goal with the Stampede resubmission system is to strike a balance for Galaxy users: those with relatively small jobs can run them quickly without a wait, while larger-scale analyses are still supported with a reasonable wait and higher job concurrency limits. See the User data and job quotas section below for more on concurrency limits.
If you know (from previous runs of the tool with similar inputs and parameters) that your job will reach the walltime on the local cluster, submit it directly to Stampede to avoid the time wasted running up to the walltime on the Galaxy cluster.
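Outside the web interface, the same state changes can be watched through the Galaxy API. Below is a minimal sketch using the BioBlend Python library; the API key and job ID are placeholders, and the exact state strings reported are an assumption based on Galaxy's usual job states.

```python
import time

from bioblend.galaxy import GalaxyInstance

# Placeholders: create an API key under User -> Preferences on Test.
gi = GalaxyInstance(url="https://test.galaxyproject.org", key="YOUR_API_KEY")
job_id = "YOUR_JOB_ID"

previous_state = None
while True:
    state = gi.jobs.show_job(job_id)["state"]
    if state != previous_state:
        print(f"job {job_id}: {previous_state} -> {state}")
        previous_state = state
    # A switch from 'running' back to 'queued' is what a walltime
    # resubmission looks like from the API side (assumption: Galaxy's
    # usual job state names).
    if state in ("ok", "error"):
        break
    time.sleep(60)
```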
Direct job execution on Stampede
Tools in the previous section can also be submitted directly to Stampede. This is a good idea if you know (or strongly suspect) that a tool will exceed the walltime on the local cluster. On the form for these tools, a Job Resource Parameters option is available; when selected, it displays a Compute Resources selection. The options are:
Compute resource | Allocation |
---|---|
Galaxy cluster (default) | variable walltime, 6 cores, 32 GB memory, no/short wait |
TACC Stampede | 48 hour walltime, 16 cores, 32 GB memory, variable wait |
Galaxy cluster test/development | 30 minute walltime, 2 cores, 16 GB memory, no/short wait |
TACC Stampede test/development | 1 hour walltime, 16 cores, 32 GB memory, variable wait |
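The same Compute Resources selection can, in principle, be supplied when launching a tool through the Galaxy API. The sketch below uses the BioBlend Python library; the tool ID, dataset ID, input names, and especially the job resource parameter names and the "stampede" option value are assumptions that must be checked against the actual tool form, since they depend on the server's job resource configuration. The same pattern applies to the Blacklight tools described in the next section.

```python
from bioblend.galaxy import GalaxyInstance

gi = GalaxyInstance(url="https://test.galaxyproject.org", key="YOUR_API_KEY")
history_id = gi.histories.get_histories()[0]["id"]

# All IDs and parameter names below are illustrative placeholders.
tool_inputs = {
    "input1": {"src": "hda", "id": "YOUR_DATASET_ID"},
    # Galaxy exposes per-job resource choices through a generic
    # "__job_resource" conditional; the option value is an assumption.
    "__job_resource|__job_resource__select": "yes",
    "__job_resource|compute_resource": "stampede",
}

result = gi.tools.run_tool(history_id, "YOUR_TOOL_ID", tool_inputs)
print("submitted job:", result["jobs"][0]["id"])
```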
Beta: De novo assembly on Blacklight
Trinity and SPAdes can be run on the Blacklight supercomputer at PSC via Galaxy Test.
As with the tools that run on Stampede, the form for these tools includes a Job Resource Parameters option; when selected, it displays a Compute Resources selection. The options are:
Compute resource | Allocation |
---|---|
PSC Blacklight 16 core (default) | 48 hour walltime, 16 cores, 128 GB memory, variable wait |
PSC Blacklight 64 core | 48 hour walltime, 64 cores, 512 GB memory, variable wait |
PSC Blacklight 256 core | 48 hour walltime, 256 cores, 2 TB memory, variable wait |
PSC Blacklight test/development | 30 minute walltime, 16 cores, 64 GB memory, no/short wait |
User data and job quotas
**Overall quotas**

Quota | Limit |
---|---|
Maximum total accounts per user | 1 (registered or unregistered) |
Maximum total user data on server | 10 GB for registered users; 1 GB for unregistered users |
Maximum concurrent jobs | 4 for registered users; 1 for unregistered users |
Some tools or job destinations have stricter job concurrency limits than the overall limits above. These tools include all of the tools that can be run on Stampede (listed above), and some additional tools. These limits are:
**Per-resource job concurrency quotas**

Resource | Concurrent job limit |
---|---|
Increased memory tools | 1 (registered or unregistered) |
Galaxy cluster | 2 for registered users; unregistered not allowed |
TACC Stampede | 3 for registered users; unregistered not allowed |
Galaxy cluster test/development | 1 for registered users; unregistered not allowed |
TACC Stampede test/development | 1 for registered users; unregistered not allowed |
"Increased memory tools" refers to a set of tools that are granted additional memory over the 8 GB default.
**[Terms and Conditions](https://test.galaxyproject.org/static/terms.html)**: *Attempts to subvert these limits by creating multiple accounts or through any other method may result in termination of all associated accounts.*
Monitoring data use
Exceeding a quota will prevent new jobs from running, but Galaxy users can monitor and manage their datasets in several ways (a scripted alternative using the Galaxy API is also sketched below):
- The percentage of your quota currently in use is shown in a usage bar in the top right corner of the Galaxy interface.
- The exact total user data size and quota limit are shown on the page: User → Preferences (top menu bar).
- The size of individual histories is listed on the page: Options → Saved Histories (left history pane's menu).
- The size of an individual dataset can be found in the dataset's expanded box, either directly under the dataset's name or by viewing the dataset's Details (click the View Details icon).
*Screenshot: Test server user interface.*
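For scripted monitoring, the same numbers are available through the Galaxy API. Below is a minimal sketch using the BioBlend Python library; the response keys used here ('nice_total_disk_usage', 'quota_percent', 'size') are assumptions that may differ between Galaxy releases, so inspect the returned dictionaries on Test before relying on them.

```python
from bioblend.galaxy import GalaxyInstance

gi = GalaxyInstance(url="https://test.galaxyproject.org", key="YOUR_API_KEY")

# Account-level usage; key names are assumptions, hence the .get() calls.
user = gi.users.get_current_user()
print("total disk usage:", user.get("nice_total_disk_usage"))
print("quota used (%):", user.get("quota_percent"))

# Per-history sizes, from the detailed history record.
for h in gi.histories.get_histories():
    details = gi.histories.show_history(h["id"])
    print(h["name"], "->", details.get("size"), "bytes")
```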
User Account Quotas
How will I know if my quota has been exceeded?
Data
The quota usage bar in the top right corner of the interface shows how much of your data quota is in use; once the quota has been exceeded, new jobs will not run until the amount of stored data is reduced.
Jobs
Any jobs queued after the limit of 4 concurrent jobs has been reached will remain in the "waiting to run" state (colored grey) until one of your running jobs completes and a slot becomes free.
When can I run jobs on the Test instance again?
Data
Reduce the amount of data in your account. Start by removing any histories that are no longer needed, using Options → Saved Histories and the Delete Permanently option. More information about managing data is covered on the Managing Datasets wiki.
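Histories can also be cleaned up from a script through the Galaxy API. The sketch below uses the BioBlend Python library and permanently purges histories matching an illustrative name prefix; the prefix is only a placeholder, and purging cannot be undone, so review the list before running the delete call.

```python
from bioblend.galaxy import GalaxyInstance

gi = GalaxyInstance(url="https://test.galaxyproject.org", key="YOUR_API_KEY")

# Illustrative placeholder: target histories whose names start with "scratch-".
for h in gi.histories.get_histories():
    if h["name"].startswith("scratch-"):
        print("purging:", h["name"])
        # purge=True permanently removes the data and frees quota space.
        gi.histories.delete_history(h["id"], purge=True)
```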
Jobs
No user action is needed to regain access to the Test server. Once your existing jobs complete and fewer than 4 remain, new jobs will be added to the queue and executed (maximum of 4 concurrent).
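The job queue can also be watched programmatically before submitting more work. A minimal sketch with the BioBlend Python library follows; the set of states treated as "active" is an assumption based on Galaxy's usual job states, and the limit of 4 matches the registered-user quota above.

```python
import time

from bioblend.galaxy import GalaxyInstance

gi = GalaxyInstance(url="https://test.galaxyproject.org", key="YOUR_API_KEY")

ACTIVE_STATES = {"new", "queued", "running"}  # assumption: typical Galaxy job states
LIMIT = 4  # registered-user concurrency limit on Test

while True:
    active = [j for j in gi.jobs.get_jobs() if j["state"] in ACTIVE_STATES]
    print(len(active), "active job(s)")
    if len(active) < LIMIT:
        break  # a job slot should now be free
    time.sleep(60)
```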
Special Use
Testing
If you are involved with scientific or functional testing of a new Galaxy tool, please send an email to [Galaxy-Bugs](mailto:galaxy-bugs AT lists DOT galaxyproject DOT org) to discuss options for data resources and a potential temporary quota increase.
Developers and Administrators
New admin features have been added and more are planned for the near term; details are explained in Disk Quotas. Feedback about the implementation of quota management is welcome on the Galaxy-Dev mailing list.
Quotas at the Galaxy Main public instance
See Main.