Main Functionalities

LabKey Server provides a suite of tools for data management, collaboration, and automation, making it a powerful platform for research and laboratory environments. Each section below summarizes its respective functionality and provides links to official documentation for further details.

Visualizations

LabKey Server offers a variety of tools to visualize, analyze, and display data, transforming complex datasets into insightful visual representations.

Plot Editor

To create new visualizations from a data grid, access the plot editor by selecting Charts > Create Chart. This feature allows the creation of various chart types, including:

  • Bar Charts – Compare measurements of numeric values across categories.
  • Box Plots – Display the distribution of data based on a five-number summary.
  • Line Plots – Show trends over time or ordered categories.
  • Pie Charts – Illustrate proportions of a whole.
  • Scatter Plots – Examine relationships between two numeric variables.
  • Time Charts – Available for study datasets and queries to plot data over time.

Column Visualizations

For quick, column-specific visualizations within a dataset:

  • Click the column header and select a visualization option, such as Bar Chart, Box & Whisker, or Pie Chart.
  • These visualizations appear above the data grid and update dynamically with the underlying data and any applied filters.

Quick Charts

To rapidly assess data without selecting a specific visualization type:

  • Click a column header and choose Quick Chart.
  • LabKey Server generates an initial chart based on the data, which can be refined further using the plot editor.

Managing Visualizations

Saved visualizations are dynamically updated to reflect changes in the underlying data. Access these visualizations through:

  • The Reports or Charts dropdown menu on the associated data grid.
  • Direct links in the Data Views web part.

Additional information

For comprehensive guidance on creating and managing visualizations, refer to the official LabKey documentation.

Lists

LabKey lists are flexible, user-defined tables that can contain as many columns as needed. They can serve various purposes, such as:

  • Editable Data Storage: Users can manually enter, update, and manage data.
  • Searchable Data Resource: Lists can be searched, filtered, sorted, and exported.
  • Data Analysis Tool: Lists can serve as the basis for queries, reports, and visualizations.

Lists are designed to handle diverse data needs, from simple spreadsheets to complex datasets, and can be linked via lookups and joins to integrate data from multiple sources. They can also be indexed for search purposes, including optional indexing of attachments added to fields.

Additional information

For information on creating, modifying, or updating a list, please refer to the FAQs. For a comprehensive guide, refer to the official LabKey documentation.

Sample Manager

LabKey Sample Manager is a web-based application designed to help laboratories efficiently track, organize, and manage samples, their lineage, associated metadata, and storage. It provides tools for sample registration, source management, storage tracking, data import, and more. Detailed information about Sample Manager can be found in the next chapter.

Collaboration

LabKey Server provides a suite of collaboration tools that enhance communication and streamline data management within a project. These tools are designed to support teamwork, facilitate knowledge sharing, and ensure secure access through role- and group-based permissions.

For complete details on each tool, please refer to the official LabKey documentation using the links provided below.

Collaboration Tools:

  • File Management: Securely share and manage files using a centralized repository.

  • Message Boards: Facilitate team discussions and project communication.

  • Issue Tracking: Address complex problems that require input from multiple team members.

  • Wiki Pages: Document and contextualize projects for easy reference.

Additional information

For an overview of all collaboration tools in LabKey Server, visit the official LabKey documentation.

Data Protection

Levels of Protection

These classifications define the level of privacy, security, and control that must be applied to different types of data based on their sensitivity.

  1. Not Protected Health Information (PHI): This is the default level of protection when creating data. It refers to data that is not considered PHI and is not subject to the same privacy and security rules. It could be public information or data that does not contain any health-related identifiers.
  2. Limited PHI: This refers to data that contains some health-related information but may have some identifiers removed or altered. It’s still considered PHI, but there are fewer privacy restrictions or protections than full PHI.
  3. Full PHI: This is data that contains complete health-related information that can identify an individual. It includes details such as name, address, medical records, etc., and is highly protected by laws.
  4. Restricted: This refers to the most sensitive types of data, such as mental health information, HIV status, or other information that is particularly sensitive. These data elements are subject to the strictest controls due to their sensitive nature.

Handling PHI

List columns (fields) can optionally be tagged as containing a given level of PHI. Tagging fields this way lets you include or exclude those columns from a folder export, based on their level of protection.
Fields can be tagged with one of the protection levels described above: Not PHI, Limited PHI, Full PHI, or Restricted.

Setting a field as PHI

  • Step 1: On the header of the table click “Design”.
  • Step 2: Click “Edit Design”.
  • Step 3: In the section “List Fields” select the corresponding field.
  • Step 4: Select the “Advanced” tab and from the drop-down box choose the “PHI Level”.

    Set PHI

Removing PHI fields from folder exports

  • Step 1: On the top right-hand side of the page select the menu button.
  • Step 2: Then select Folder > Management.
  • Step 3: Select and open the “Export” tab.
  • Step 4: Deactivate the “Include PHI Columns” check-box to exclude all fields marked as PHI, or activate it and select a PHI level to include all fields tagged with that level or below.
  • Step 5: Select the objects you wish to export and hit the button “Export”.

    Export PHI

SQL Queries

LabKey Server offers robust tools for creating and managing SQL queries, enabling users to:

  • Create filtered data views
  • Join data across multiple tables
  • Group data and compute aggregates
  • Add calculated columns
  • Customize data display using query metadata
  • Develop staging tables for report generation

Special Features:

  • Lookups: Utilize an intuitive syntax (Table.ForeignKey.FieldFromForeignTable) to link tables without traditional JOIN statements.
  • Parameterized SQL Statements: Incorporate parameters into SQL queries, allowing dynamic data retrieval based on user input.
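
Both features can be combined in a single LabKey SQL query. The sketch below is a hypothetical example: the table `Samples`, its lookup column `CreatedBy` (pointing at a users table), and the parameter `MinVolume` are placeholders, not objects that exist on your server.

```sql
-- Declare a query parameter with a default value
PARAMETERS (MinVolume DOUBLE DEFAULT 10.0)
SELECT
    Samples.Name,
    -- Lookup syntax: Table.ForeignKey.FieldFromForeignTable, no JOIN needed
    Samples.CreatedBy.DisplayName AS Creator,
    Samples.Volume
FROM Samples
WHERE Samples.Volume >= MinVolume
```

When the query runs, LabKey resolves the lookup through the foreign key and substitutes the caller-supplied value (or the default) for `MinVolume`.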

Processing of SQL Queries

LabKey Server parses each query, performs security checks, and translates it into the native SQL dialect of the underlying database before execution.

Additional information

For comprehensive guidance on creating and managing SQL queries, refer to the official LabKey documentation.

Custom Modules

LabKey Server allows users to create custom modules to extend its functionality. These modules package resources like R reports, SQL queries, HTML pages, and Java code into a standardized directory structure for easy deployment.

File-based modules, which do not include Java code, can be deployed without compilation, enabling direct testing and deployment—often without restarting the server.
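
As an illustration, a minimal file-based module might be laid out as follows. The module name and file names are placeholders; check the official documentation for the authoritative directory structure expected by your LabKey version.

```text
helloModule/
├── module.properties          # module name, version, and metadata
├── queries/
│   └── lists/
│       └── Iris.js            # trigger script bound to the 'Iris' list
├── reports/
│   └── schemas/
│       └── lists/
│           └── Iris/
│               └── summary.r  # R report attached to the 'Iris' query
└── views/
    └── begin.html             # simple HTML view served by the module
```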

Additional information

To explore more about custom modules, please refer to the official LabKey documentation.

Extract-Transform-Load (ETL)

LabKey Server’s Extract-Transform-Load (ETL) functionality enables users to automate data workflows by:

  1. Extracting data from a source database.
  2. Transforming the data as needed.
  3. Loading it into a target database.

This process facilitates tasks such as integrating data from multiple sources, migrating between database schemas, normalizing data, and scheduling regular data synchronizations.

ETL Definition Options

There are two primary methods to define ETLs in LabKey Server:

  • Module ETLs: These are included in custom modules, making them easier to version control and deploy across multiple systems (e.g., staging and production environments).

  • User Interface (UI) ETLs: Defined directly within the LabKey Server UI, these ETLs are convenient for creating processes on a running server but are specific to the server where they’re created.

A hybrid approach can also be employed, where a UI-defined ‘wrapper’ ETL calls a set of module-defined component ETLs, offering a blend of both methods’ advantages.

ETL Development Process

Developing an ETL in LabKey Server typically involves:

  1. Identifying the Source Data: Understanding its structure, location, and access requirements.
  2. Creating the Target Table: Defining where and how the transformed data will be stored.
  3. Developing the ETL Process: Performing necessary transformations and loading data into the target table.
  4. Scheduling the ETL: Setting up the ETL to run at specified intervals, if needed.
  5. Implementing Additional Options: Such as timestamp-based filtering or handling data deletions from the source.
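
The steps above come together in an ETL definition, an XML file placed in a module’s etls/ directory. The sketch below illustrates the general shape only; the schema, query, and schedule values are placeholders, and element names should be verified against the official ETL documentation.

```xml
<etl xmlns="http://labkey.org/etl/xml">
    <name>SampleSync</name>
    <description>Copy new and changed rows from the source list to the target table</description>
    <transforms>
        <transform id="copySamples" type="org.labkey.di.pipeline.TransformTask">
            <source schemaName="lists" queryName="SourceSamples"/>
            <destination schemaName="lists" queryName="TargetSamples" targetOption="merge"/>
        </transform>
    </transforms>
    <!-- Only pick up rows modified since the last successful run -->
    <incrementalFilter className="ModifiedSinceFilterStrategy" timestampColumnName="Modified"/>
    <!-- Run automatically every hour -->
    <schedule><poll interval="1h"/></schedule>
</etl>
```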

Additional information

For a comprehensive guide on ETL processes in LabKey Server, refer to the official LabKey documentation.

LabKey Client APIs

LabKey Server provides a range of client APIs that allow programmatic interaction with its data and services. These APIs enable users to automate tasks, query and manipulate data, and integrate external applications with LabKey. All APIs follow LabKey’s security model, ensuring user-based access control and auditing.

Supported APIs

  • JavaScript API – Build custom UI components and automate data interactions within LabKey.
  • Java API – Develop Java applications that interact with LabKey data and services.
  • Python API – Query, insert, and update LabKey data using Python scripts.
  • R API – Integrate LabKey data with R for statistical analysis.
  • SAS API – Perform data processing and analysis with SAS.
  • Perl API – Interact with LabKey using Perl scripting.
  • HTTP Interface – Access LabKey programmatically via HTTP requests.

Accessing LabKey APIs from Your Local Environment

To begin using any of the supported LabKey APIs from your local machine, follow these steps:

Step 1. Install the Required Client Library

First, install the appropriate client library for your programming environment:

  • Python:
    Install the official LabKey package via pip:
    pip install labkey
    
  • R:
    Install the “Rlabkey” package using the following command from the R console:
    install.packages("Rlabkey")
    
  • Other languages (Java, Perl, SAS, JavaScript, HTTP):
    Please refer to the LabKey Client APIs Documentation for setup instructions specific to each language.

Step 2. Configure Authentication Using a .netrc File

LabKey client libraries can automatically pick up credentials from a .netrc (UNIX/Mac) or _netrc (Windows) file, eliminating the need to embed passwords or API keys in your scripts.

  • On macOS, Linux, or UNIX:

    1. In your home directory, create a text file named .netrc.
    2. Set restrictive permissions so only you can access it: chmod 600 ~/.netrc
  • On Windows:

    1. In your %HOME% directory (e.g., C:\Users\), create a text file named _netrc (no extension!).
    2. Ensure it’s a flat text file—not a Word or RTF file, and confirm it doesn’t carry a hidden .txt extension.
    3. Set your HOME environment variable to this folder.

File format

Each machine entry in the .netrc (or _netrc) file must include three fields: machine, login, and password. These fields must be separated by whitespace or commas. Separate multiple machine entries with a blank line.

Example:

machine myserver.labkey.org
login apikey
password YOUR_API_KEY_HERE
  • machine
    Specify only the hostname of your LabKey server. Do not include https://, port numbers, or folder paths. For example: labkey.scicore.unibas.ch

  • login
    Must be the literal string apikey. This tells LabKey to authenticate using an API key (recommended) instead of a username/password.

  • password
    Provide your full API key token, exactly as it was generated from your LabKey account. Learn more about generating an API key here.
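
To sanity-check that a credentials file parses the way client libraries expect, Python’s standard-library netrc module can read it directly. The hostname and key below are placeholders, and the snippet writes a throwaway file rather than touching your real ~/.netrc:

```python
import netrc
import os
import tempfile

# Write a throwaway file in the three-field format described above.
# 'myserver.labkey.org' and 'YOUR_API_KEY_HERE' are placeholders.
content = (
    "machine myserver.labkey.org\n"
    "login apikey\n"
    "password YOUR_API_KEY_HERE\n"
)
with tempfile.NamedTemporaryFile("w", suffix="_netrc", delete=False) as f:
    f.write(content)
    path = f.name

# authenticators() returns a (login, account, password) tuple for the host
login, _account, password = netrc.netrc(path).authenticators("myserver.labkey.org")
print(login)     # apikey
print(password)  # YOUR_API_KEY_HERE

os.unlink(path)
```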

Step 3. Run Your Script

Once installed and authenticated, you can begin accessing LabKey resources via the respective API. The libraries will automatically detect and use your .netrc credentials.

Script Examples

Accessing List Data with the Python API

The following example demonstrates how to retrieve data from the “Iris” list located in the “lists” schema of the project “Examples/Iris dataset” using the select_rows() method from the LabKey Python API:

from labkey.api_wrapper import APIWrapper

# Establish connection to the LabKey Server project
server_context = APIWrapper(
    domain='labkey.scicore.unibas.ch',
    container_path='Examples/Iris dataset',
    context_path='labkey',
    use_ssl=True
)

# Query the 'Iris' list in the 'lists' schema
result = server_context.query.select_rows(schema_name='lists', query_name='Iris')

# Print the 'SepalLength' of the first row
print(result['rows'][0]['SepalLength'])

Filtering List Rows Using QueryFilter in the Python API

The following example shows how to apply a filter to a list query using the QueryFilter class to return only rows where the SepalLength is between 5 and 6:

from labkey.api_wrapper import APIWrapper

# For convenience, load QueryFilter explicitly
from labkey.query import QueryFilter

# Establish connection to the LabKey Server project
server_context = APIWrapper('labkey.scicore.unibas.ch', 'Examples/Iris dataset', 'labkey', use_ssl=True)

# Define filter to select rows where SepalLength is between 5 and 6
filters = [
    QueryFilter('SepalLength', "5, 6", QueryFilter.Types.BETWEEN)
]

# Query the 'Iris' list in the 'lists' schema with the filter
results = server_context.query.select_rows(
    schema_name='lists',
    filter_array=filters,
    query_name='Iris'
)['rows']

# Print the number of rows and their contents
print(f'Number of results found: {len(results)}')
print('\nAll results as key-value pairs:')
for result in results:
    for key, value in result.items():
        print(f'{key}: {value}')

For more sample Python scripts, see the LabKey Python API Samples on GitHub.

Accessing List Data with the R API

The following example demonstrates how to retrieve data from the “Iris” list located in the “lists” schema of the project “Examples/Iris dataset” using the labkey.selectRows() function:

library(Rlabkey)

rows <- labkey.selectRows(
    baseUrl = "https://labkey.scicore.unibas.ch/labkey",
    folderPath = "Examples/Iris dataset",
    schemaName = "lists",
    queryName = "Iris"
)

print(rows)

Additional information

You can find usage examples and documentation for each API in the LabKey Client APIs Documentation.

Trigger Scripts

Trigger scripts in LabKey Server are JavaScript functions that execute automatically in response to data modifications—such as insertions, updates, or deletions—on specific database tables or queries. These scripts run within the LabKey Server environment using the Rhino JavaScript engine.

Typical Uses:

  • Data Validation – Ensure incoming data meets predefined criteria before acceptance.
  • Data Transformation – Modify or enrich data during the insertion process.
  • Cross-Table Operations – Update related tables based on changes in the primary table.
  • External Integrations – Trigger external APIs or send notifications upon data changes.

Implementation Overview:

  1. Location – Trigger scripts are stored within a module’s queries directory, following a structure based on the schema and table names.
  2. Activation – Once the module containing the trigger script is deployed, it must be enabled in the relevant project or folder.
  3. Execution Context – Scripts run with the permissions of the user initiating the data modification, ensuring security and appropriate access control.
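
As a sketch, a trigger script for a hypothetical “Iris” list (stored as queries/lists/Iris.js in a module) could validate rows before they are written. The hook names beforeInsert and beforeUpdate are part of the trigger script contract; the field name SepalLength is just an example:

```javascript
// Hypothetical trigger script for queries/lists/Iris.js.
// LabKey calls these functions around each row-level data modification.

// Reject rows whose SepalLength is missing or non-positive by
// attaching a message to the errors object under the field name.
function beforeInsert(row, errors) {
    if (row.SepalLength == null || row.SepalLength <= 0) {
        errors.SepalLength = "SepalLength must be a positive number";
    }
}

// Apply the same validation when an existing row is updated.
function beforeUpdate(row, oldRow, errors) {
    beforeInsert(row, errors);
}
```

If any property is set on the errors object, LabKey rejects the modification and reports the message to the user.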

Additional information

For comprehensive guidance on creating and managing trigger scripts, refer to the official LabKey documentation.

Script Pipeline: Running Scripts in Sequence

LabKey Server’s script pipeline enables the automation of data processing by executing scripts and commands in a defined sequence, where the output of one script serves as the input for the next. This approach simplifies workflows, reduces errors, standardizes analyses, and ensures tracking of inputs, script versions, and outputs.

Supported Scripting Languages:

  • R
  • Perl
  • Python

Info

JavaScript is not currently supported as a pipeline task language, though pipelines can be invoked by external JavaScript clients.

Example Pipeline Job:

  1. Pass raw data to an R script for initial processing.
  2. Process the results with a Perl script.
  3. Insert the processed data into an assay database.

Benefits of Using Script Pipelines:

  • Simplification – Automates complex workflows, reducing manual intervention.
  • Standardization – Ensures consistent application of data processing protocols.
  • Reproducibility – Facilitates the replication of analyses for verification and validation.
  • Tracking – Maintains records of inputs, script versions, and outputs for auditing and troubleshooting.

Additional information

For comprehensive guidance on setting up and managing script pipelines, refer to the official LabKey documentation.