Mollusk
Write tests for Solana programs in Rust using Mollusk.
Mollusk is a lightweight test harness for Solana programs. It provides a simple interface for testing Solana program executions in a minified Solana Virtual Machine (SVM) environment.
It does not create any semblance of a validator runtime, but instead provisions a program execution pipeline directly from lower-level SVM components.
In summary, the main processor, `process_instruction`, creates minified instances of Agave's program cache, transaction context, and invoke context. It uses these components to directly execute the provided program's ELF using the BPF Loader.
Because it does not use AccountsDB, Bank, or any other large Agave components, the harness is exceptionally fast. However, it does require the user to provide an explicit list of accounts to use, since it has nowhere to load them from.
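A minimal sketch of this pattern is shown below. The program ID, account key, and the ELF name `my_program` are placeholders; the harness must be able to locate the compiled program ELF for `Mollusk::new` to succeed.

```rust
use {
    mollusk_svm::Mollusk,
    solana_sdk::{
        account::AccountSharedData,
        instruction::{AccountMeta, Instruction},
        pubkey::Pubkey,
    },
};

#[test]
fn test_process_instruction() {
    // Placeholder program and account keys.
    let program_id = Pubkey::new_unique();
    let key = Pubkey::new_unique();

    let instruction = Instruction::new_with_bytes(
        program_id,
        &[],
        vec![AccountMeta::new(key, false)],
    );

    // There is no AccountsDB: every account the instruction touches
    // must be supplied explicitly.
    let accounts = vec![(key, AccountSharedData::new(0, 0, &Pubkey::default()))];

    // "my_program" names the program ELF to load.
    let mollusk = Mollusk::new(&program_id, "my_program");
    let result = mollusk.process_instruction(&instruction, &accounts);
    // Inspect `result` (compute units consumed, resulting accounts, etc.).
}
```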
The test environment can be further configured by adjusting the compute budget, feature set, or sysvars. These configurations are stored directly on the test harness (the `Mollusk` struct), but can be manipulated through a handful of helpers.
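For instance, a sketch of adjusting the harness before running a test; the public `compute_budget` field and the `warp_to_slot` helper assume a recent mollusk-svm version, and the slot value is arbitrary:

```rust
use {mollusk_svm::Mollusk, solana_sdk::pubkey::Pubkey};

#[test]
fn test_configured_harness() {
    let program_id = Pubkey::new_unique(); // placeholder
    let mut mollusk = Mollusk::new(&program_id, "my_program");

    // Lower the compute budget for this run.
    mollusk.compute_budget.compute_unit_limit = 50_000;

    // Jump the clock sysvar ahead to a later slot.
    mollusk.warp_to_slot(42);
}
```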
Four main API methods are offered:

- `process_instruction`: Process an instruction and return the result.
- `process_and_validate_instruction`: Process an instruction and perform a series of checks on the result, panicking if any checks fail.
- `process_instruction_chain`: Process a chain of instructions and return the result.
- `process_and_validate_instruction_chain`: Process a chain of instructions and perform a series of checks on each result, panicking if any checks fail.
Single Instructions
Both `process_instruction` and `process_and_validate_instruction` deal with single instructions. The former simply processes the instruction and returns the result, while the latter processes the instruction and then performs a series of checks on the result. In both cases, the result is also returned.
To apply checks via `process_and_validate_instruction`, developers can use the `Check` enum, which provides a set of common checks.
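A sketch of validating a hypothetical no-op instruction with `Check`; the account-check builder shown assumes the current mollusk-svm API, and all values are placeholders:

```rust
use {
    mollusk_svm::{result::Check, Mollusk},
    solana_sdk::{
        account::AccountSharedData,
        instruction::{AccountMeta, Instruction},
        pubkey::Pubkey,
    },
};

#[test]
fn test_validate_instruction() {
    let program_id = Pubkey::new_unique(); // placeholder
    let key = Pubkey::new_unique();

    let instruction = Instruction::new_with_bytes(
        program_id,
        &[],
        vec![AccountMeta::new(key, false)],
    );
    let lamports = 100_000;
    let accounts = vec![(key, AccountSharedData::new(lamports, 0, &Pubkey::default()))];

    let mollusk = Mollusk::new(&program_id, "my_program");

    // Panics if any check fails; the result is still returned.
    mollusk.process_and_validate_instruction(
        &instruction,
        &accounts,
        &[
            Check::success(),
            // Assert on post-execution account state.
            Check::account(&key).lamports(lamports).build(),
        ],
    );
}
```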
Note: `Mollusk::default()` will create a new `Mollusk` instance without adding any provided BPF programs. It will still contain a subset of the default builtin programs. For more builtin programs, you can add them yourself or use the `all-builtins` feature.
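As a sketch, a program can be registered on a default harness after construction; the `add_program` helper and its loader argument assume a recent mollusk-svm version, and the ELF name is a placeholder:

```rust
use {mollusk_svm::Mollusk, solana_sdk::pubkey::Pubkey};

#[test]
fn test_default_harness() {
    // Only a subset of builtins; no user-provided BPF programs yet.
    let mut mollusk = Mollusk::default();

    // Register a program afterward.
    let program_id = Pubkey::new_unique();
    mollusk.add_program(
        &program_id,
        "my_program",
        &solana_sdk::bpf_loader_upgradeable::id(),
    );
}
```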
Instruction Chains
Both `process_instruction_chain` and `process_and_validate_instruction_chain` deal with chains of instructions. The former processes each instruction in the chain and returns the final result, while the latter processes each instruction in the chain and then performs a series of checks on each result. In both cases, the final result is also returned.
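A sketch of chaining two hypothetical instructions against the same account; exact slice types may vary between mollusk-svm versions:

```rust
use {
    mollusk_svm::Mollusk,
    solana_sdk::{
        account::AccountSharedData,
        instruction::{AccountMeta, Instruction},
        pubkey::Pubkey,
    },
};

#[test]
fn test_instruction_chain() {
    let program_id = Pubkey::new_unique(); // placeholder
    let key = Pubkey::new_unique();

    // Two hypothetical instructions touching the same account.
    let ix_one =
        Instruction::new_with_bytes(program_id, &[0], vec![AccountMeta::new(key, false)]);
    let ix_two =
        Instruction::new_with_bytes(program_id, &[1], vec![AccountMeta::new(key, false)]);
    let accounts = vec![(key, AccountSharedData::new(0, 0, &Pubkey::default()))];

    let mollusk = Mollusk::new(&program_id, "my_program");

    // Account state flows from one instruction to the next;
    // the returned result is that of the final instruction.
    let result = mollusk.process_instruction_chain(&[ix_one, ix_two], &accounts);
}
```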
Just like with `process_and_validate_instruction`, developers can use the `Check` enum to apply checks via `process_and_validate_instruction_chain`.
Notice that `process_and_validate_instruction_chain` takes a slice of tuples, where each tuple contains an instruction and a slice of checks. This allows the developer to apply specific checks to each instruction in the chain. The result returned by the method is the final result of the last instruction in the chain.
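A sketch of the tuple form, pairing each (placeholder) instruction with its own checks; the exact signature assumes a recent mollusk-svm version:

```rust
use {
    mollusk_svm::{result::Check, Mollusk},
    solana_sdk::{
        account::AccountSharedData,
        instruction::{AccountMeta, Instruction},
        pubkey::Pubkey,
    },
};

#[test]
fn test_validate_instruction_chain() {
    let program_id = Pubkey::new_unique(); // placeholder
    let key = Pubkey::new_unique();

    let ix_one =
        Instruction::new_with_bytes(program_id, &[0], vec![AccountMeta::new(key, false)]);
    let ix_two =
        Instruction::new_with_bytes(program_id, &[1], vec![AccountMeta::new(key, false)]);
    let accounts = vec![(key, AccountSharedData::new(0, 0, &Pubkey::default()))];

    let checks_one = [Check::success()];
    let checks_two = [Check::success()];

    let mollusk = Mollusk::new(&program_id, "my_program");

    // Each tuple pairs an instruction with the checks applied to its result.
    mollusk.process_and_validate_instruction_chain(
        &[(&ix_one, &checks_one[..]), (&ix_two, &checks_two[..])],
        &accounts,
    );
}
```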
It's important to understand that instruction chains are not equivalent to Solana transactions. Mollusk does not impose transaction-level constraints on instruction chains, such as limits on loaded account keys or size. Instruction chains are primarily a tool for testing program execution.
Benchmarking Compute Units
The Mollusk Compute Unit Bencher can be used to benchmark the compute unit usage of Solana programs. It provides a simple API for developers to write benchmarks for their programs, which can be checked while making changes to the program.
A markdown file is generated, which captures all of the compute unit benchmarks. If a benchmark has a previous value, the delta is also recorded. This can be useful for developers to check the implications of changes to the program on compute unit usage.
The `must_pass` argument can be provided to trigger a panic if any defined benchmark tests do not pass. `out_dir` specifies the directory where the markdown file will be written.
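A sketch of a bench entry point (e.g. `benches/compute_units.rs`); the program, instruction, accounts, bench name, and output directory are all placeholders, and the builder API assumes a recent mollusk-svm-bencher version:

```rust
use {
    mollusk_svm::Mollusk,
    mollusk_svm_bencher::MolluskComputeUnitBencher,
    solana_sdk::{
        account::AccountSharedData,
        instruction::{AccountMeta, Instruction},
        pubkey::Pubkey,
    },
};

fn main() {
    // Placeholder program, instruction, and accounts.
    let program_id = Pubkey::new_unique();
    let key = Pubkey::new_unique();
    let instruction = Instruction::new_with_bytes(
        program_id,
        &[],
        vec![AccountMeta::new(key, false)],
    );
    let accounts = vec![(key, AccountSharedData::new(0, 0, &Pubkey::default()))];

    let mollusk = Mollusk::new(&program_id, "my_program");

    MolluskComputeUnitBencher::new(mollusk)
        .bench(("my_bench", &instruction, &accounts))
        .must_pass(true) // panic if any benchmark fails
        .out_dir("../target/benches") // where the markdown report is written
        .execute();
}
```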
Developers can invoke this benchmark test with `cargo bench`. They may need to add a bench to the project's `Cargo.toml`.
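A typical entry might look like the following; the name is a placeholder and must match the bench file in `benches/`:

```toml
[[bench]]
name = "compute_units"
harness = false
```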
The markdown file will contain entries according to the defined benchmarks.
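An illustrative (not exact) shape of the generated report, with placeholder names and values:

```markdown
| Name     | CUs   | Delta |
|----------|-------|-------|
| my_bench | 1,200 | -150  |
```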